Resolving duplicate mutex lock in Go

I have a bunch of functions in a Go program that work on a struct which uses a mutex to manage concurrent access.
Some of these functions, which operate on specific data, need locks and therefore call mutex.Lock() to acquire the mutex that guards access to that data. Today I encountered an issue where two of these locking methods call each other. As soon as mutex.Lock() is called a second time it blocks, of course.
The problem I am facing is very similar to this code: http://play.golang.org/p/rPARZsordI
Is there any best practice in Go for solving this issue? As far as I know, recursive (reentrant) locks are not available in Go.

This looks like a design flaw in your system. You should factor out the parts that you need both with and without the lock held. E.g. if what you do is
func (t *Thing) A() { t.Lock(); defer t.Unlock(); t.foo(); t.B() }
func (t *Thing) B() { t.Lock(); defer t.Unlock(); t.bar() }
then what you should do instead is
func (t *Thing) A() { t.Lock(); defer t.Unlock(); t.foo(); t.b() }
func (t *Thing) B() { t.Lock(); defer t.Unlock(); t.b() }
func (t *Thing) b() { t.bar() }
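For completeness, here is a runnable sketch of that pattern (foo and bar from the snippet above are stand-ins for your real work):
package main

import (
    "fmt"
    "sync"
)

type Thing struct {
    sync.Mutex
    count int
}

// A and B are the exported entry points; they take the lock.
func (t *Thing) A() {
    t.Lock()
    defer t.Unlock()
    t.foo()
    t.b() // call the unexported variant, not B, to avoid a second Lock
}

func (t *Thing) B() {
    t.Lock()
    defer t.Unlock()
    t.b()
}

// b does the actual work and assumes the caller already holds the lock.
func (t *Thing) b() { t.count++ }

func (t *Thing) foo() { fmt.Println("foo") }

func main() {
    t := &Thing{}
    t.A() // no deadlock: the lock is taken exactly once per exported call
    t.B()
    fmt.Println(t.count) // 2
}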

Related

Deallocating resources through move semantics

I came across a codebase where "moving ownership" via move semantics is used very frequently to deallocate resources. Example:
void process_and_free(Foo&& arg) {
    auto local_arg = std::move(arg);
    ... // Use local_arg
}
void caller() {
    Foo foo;
    process_and_free(std::move(foo));
    ...
}
If Foo holds movable/dynamic resources, they will be deallocated after calling process_and_free (they are moved into local_arg, which goes out of scope inside process_and_free). So far so good...
However, I was wondering what happens, if local_arg is not created:
void process(Foo&& arg) {
    ... // Use arg everywhere
}
Will the resources in Foo be deallocated in this case as well?
Second question: do you think it is good engineering style to use processing functions to deallocate the dynamic resources of a passed argument? I have read that move semantics does not guarantee the moving of dynamic resources: supposedly they can be moved but don't have to be. Hence, IMHO, it may be ambiguous what state foo is in after the process_and_free call...
I think it may be more obvious to a code reader/reviewer if the resources are deallocated inside the caller:
void caller() {
    {
        Foo foo;
        process_and_free(std::move(foo));
    }
    ...
}
Answering both questions:
Without creating the local_arg variable the resources will not be moved; i.e., the Foo&& arg parameter does not take ownership of the resources during the process(std::move(foo)) call. std::move is only a cast to an rvalue reference; an actual move happens only when a move constructor or move assignment is invoked.
Because the resources are not moved in the second example, IMHO it is a bad and confusing coding style. The reason is that, looking only at the call process(std::move(foo)), the reader doesn't know what happens to foo without also reviewing process (btw, if process is a member function, it can be const with the same effect):
void process(Foo&& arg) const {
    auto local_arg = std::move(arg);
    ...
}
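To make the first point concrete, here is a small self-contained demonstration (this Foo, holding a std::vector as its resource, is a hypothetical stand-in for the real type):
#include <iostream>
#include <utility>
#include <vector>

struct Foo {
    std::vector<int> data{1, 2, 3};
};

// Never materializes a local Foo, so nothing is actually moved out of arg.
void process(Foo&& arg) {
    std::cout << "process sees " << arg.data.size() << " elements\n";
}

// Moves from arg into a local that is destroyed on return.
void process_and_free(Foo&& arg) {
    auto local_arg = std::move(arg);
    std::cout << "process_and_free took " << local_arg.data.size() << " elements\n";
}

int main() {
    Foo a;
    process(std::move(a));
    std::cout << "after process: " << a.data.size() << " elements\n"; // still 3

    Foo b;
    process_and_free(std::move(b));
    // b is now in a valid but unspecified state; for std::vector this
    // typically means empty.
    std::cout << "after process_and_free: " << b.data.size() << " elements\n";
}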

Kotlin Concurrency: Any standard function to run code in a Lock?

I've been searching for a function that takes an object of type Lock and runs a block of code with that lock held, taking care of both locking and unlocking.
I'd implement it as follows:
fun <T : Lock> T.runLocked(block: () -> Unit) {
    lock()
    try {
        block()
    } finally {
        unlock()
    }
}
Used like this:
val l = ReentrantLock()
l.runLocked {
    println(l.isLocked)
}
println(l.isLocked)
//true
//false
Is anything like this available? I could only find the synchronized function, which cannot be used like this.
You are looking for withLock, which has the exact implementation you've written yourself, except it has a generic parameter for the result of the block instead of the receiver type.
You can find other concurrency related methods of the standard library here, in the kotlin.concurrent package.
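For reference, usage mirrors your runLocked, except that withLock returns the block's result:
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

fun main() {
    val l = ReentrantLock()
    val result = l.withLock {
        println(l.isLocked) // true
        42 // the block's value is returned by withLock
    }
    println(l.isLocked) // false
    println(result) // 42
}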

What is the optimal way to wait for multiple futures in C++11?

I am looking for the optimal way, in terms of execution time, to wait for independent futures to finish.
Dealing with only two futures is simple; the optimal way is as follows:
auto f1 = async(launch::async, []{ doSomething('.'); });
auto f2 = async(launch::async, []{ doSomething('+'); });
while (f1.wait_for(chrono::seconds(0)) != future_status::ready &&
       f2.wait_for(chrono::seconds(0)) != future_status::ready)
{ }
f1.get();
f2.get();
This way, we leave the while loop once at least one of the futures has finished, so calling .get() on both afterwards won't make the program lose time.
How about n futures?
If you want to create an arbitrary number of std::futures, it might help to put them all into a std::vector. You can then loop through the vector and get the results. Note that get handles waiting.
#include <future>
#include <string>
#include <type_traits>
#include <utility>
#include <vector>

// Result type defined here: http://en.cppreference.com/w/cpp/thread/async
template<typename F, typename... Args>
using AsyncResult = std::result_of_t<std::decay_t<F>(std::decay_t<Args>...)>;

template<typename T, typename F1, typename F2>
void do_par(const std::vector<T>& ts, F1&& on_t, F2&& accumulate) {
    // The standard doesn't require that these futures correspond to individual
    // threads, but there's a good chance they'll be implemented that way.
    std::vector<std::future<AsyncResult<F1, T>>> threads;
    for (const T& t : ts) { threads.push_back(std::async(on_t, t)); }
    for (auto& future : threads) { accumulate(std::move(future)); }
}

template<typename T, typename F>
std::vector<AsyncResult<F, T>> map_par(const std::vector<T>& ts, F&& on_t) {
    std::vector<AsyncResult<F, T>> out;
    do_par(ts, on_t, [&](auto&& future_) {
        out.push_back(future_.get()); // think of this as just waiting on each task to finish
    });
    return out;
}

std::string doSomething(const std::string&) { return std::string("yo"); }
And then you can do
const std::vector<std::string> results = map_par(
    std::vector<std::string>{".", "+", "et al"}, doSomething
);
This simple map_par function isn't quite the savviest solution. It might help to set up a thread queue (which itself would own a thread pool) to cut out the overhead of spawning individual threads, as well as the context-switching overhead which comes into play when you have more threads than CPU cores. Your thread queue implementation might want to have its own async method, akin to std::async.
If you want to use results immediately as they come in, regardless of input order, consider setting up a single-reader, multiple-writer arrangement (which incidentally also involves a queue).
std::condition_variable helps with both of the above suggestions.
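As a rough sketch of those last two suggestions (hypothetical, untuned code; the workers stand in for your real tasks), writers push finished results into a shared queue and a single reader consumes them in completion order:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

int main() {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> results;
    const std::size_t n = 3;

    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < n; ++i) {
        workers.emplace_back([&, i] {
            std::string r = "result " + std::to_string(i); // stand-in for real work
            {
                std::lock_guard<std::mutex> lk(m);
                results.push(std::move(r));
            }
            cv.notify_one(); // wake the reader
        });
    }

    // Single reader: handles results in completion order, regardless of input order.
    for (std::size_t handled = 0; handled < n; ++handled) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !results.empty(); });
        std::string r = std::move(results.front());
        results.pop();
        lk.unlock();
        std::cout << r << '\n';
    }

    for (auto& w : workers) { w.join(); }
}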

golang threading model comparison

I have a piece of data
type data struct {
    // all good data here
    ...
}
This data is owned by a manager and used by other threads for reading only. The manager needs to periodically update the data. How do I design the threading model for this? I can think of two options:
1.
type manager struct {
    // Acquire the read lock when other threads read the data;
    // acquire the write lock when the manager wants to update.
    lock sync.RWMutex
    // pointer to the current data
    p *data
}
2.
type manager struct {
    // Copy the pointer when other threads want to use the data.
    // When the manager updates, just change p to point to the new data.
    p *data
}
Does the second approach work? It seems I don't need any lock: if other threads hold a pointer to the old data, it is fine for the manager to update the original pointer. Since Go is garbage collected, the old data will be released automatically once no other thread still references it. Am I correct?
Your first option is fine and is perhaps the simplest to implement. However, with many readers it could lead to poor performance, as the manager may struggle to obtain the write lock.
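For reference, option 1 would look something like this (a minimal sketch; the method names Data and update are made up for illustration and mirror the atomic.Value version below):
func (m *manager) Data() *data {
    m.lock.RLock()
    defer m.lock.RUnlock()
    return m.p
}

func (m *manager) update(d *data) {
    m.lock.Lock()
    defer m.lock.Unlock()
    m.p = d
}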
As the comments on your question have stated, your second option (as-is) can cause a race condition and lead to unpredictable behaviour.
You could implement your second option by using atomic.Value. This would allow you to store the pointer to some data struct and atomically update this for the next readers to use. For example:
// Data shared with readers
type data struct {
    // all the fields
}

// Manager
type manager struct {
    v atomic.Value
}

// Data is used by readers to obtain a fresh copy of the data to
// work with, e.g. inside a loop.
func (m *manager) Data() *data {
    return m.v.Load().(*data)
}

// update is called internally to publish new data for readers.
func (m *manager) update() {
    d := &data{
        // ... set values here
    }
    m.v.Store(d)
}
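A hypothetical usage sketch of this manager (the value field and the timings are invented for illustration):
package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

type data struct {
    value int // placeholder field
}

type manager struct {
    v atomic.Value
}

func (m *manager) Data() *data {
    return m.v.Load().(*data)
}

func (m *manager) update(n int) {
    m.v.Store(&data{value: n})
}

func main() {
    m := &manager{}
    m.update(0) // must store once before any Load

    // Manager goroutine: periodically publish fresh data.
    go func() {
        for i := 1; ; i++ {
            time.Sleep(10 * time.Millisecond)
            m.update(i)
        }
    }()

    // Reader: each Data() call returns a consistent snapshot, lock-free.
    for i := 0; i < 5; i++ {
        fmt.Println(m.Data().value)
        time.Sleep(25 * time.Millisecond)
    }
}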

Add 'TimeOut' parameter to 'Func<>' in C# 4.0

Using C# 4.0 features I want a generic wrapper for encapsulating functions that adds a TimeOut parameter to them.
For example we have a function like:
T DoLengthyOperation()
Using Func we have:
Func<T>
This is good, and the function can be called either synchronously (Invoke) or asynchronously (BeginInvoke).
Now think of a TimeOut added to this behavior: if DoLengthyOperation() returns within the specified time we get true returned, otherwise false.
Something like:
FuncTimeOut<in T1, in T2, ..., out TResult, int timeOut, bool result>
Implement C# Generic Timeout
Don't return true/false for complete. Throw an exception.
I don't have time to implement it, but it should be possible and your basic signature would look like this:
T DoLengthyOperation<T>(int TimeoutInMilliseconds, Func<T> operation)
And you could call this method either by passing in the name of any method matching Func<T> as an argument or by defining it in place as a lambda expression. Unfortunately, you'll also need to provide an overload for each different kind of function you want, as there's currently no way to specify a variable number of generic type arguments.
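A minimal sketch of that signature, assuming the .NET 4 Task Parallel Library is available (the Timeouts class name is made up, and it throws on expiry per the advice above):
using System;
using System.Threading.Tasks;

static class Timeouts
{
    public static T DoLengthyOperation<T>(int timeoutInMilliseconds, Func<T> operation)
    {
        // Run the operation on another thread and wait up to the timeout.
        Task<T> task = Task.Factory.StartNew(operation);
        if (!task.Wait(timeoutInMilliseconds))
        {
            // Note: the underlying operation keeps running; it is merely abandoned here.
            throw new TimeoutException("Operation did not complete in time.");
        }
        return task.Result;
    }
}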
Instead of mixing out parameters and bool I would instead construct a separate type to capture the return. For example:
struct Result<T> {
    private readonly bool _isSuccess;
    private readonly T _value;
    public bool IsSuccess { get { return _isSuccess; } }
    public T Value { get { return _value; } }
    public Result(T value) {
        _value = value;
        _isSuccess = true;
    }
}
This is definitely possible to write. The only problem is that in order to implement a timeout, it's necessary to do one of the following:
1. Move the long running operation onto another thread.
2. Add cancellation support to the long running operation and signal cancellation from another thread.
3. Ingrain the notion of timeout into the operation itself and have it check whether the time has expired at many points in the operation.
Which is best for you is hard to determine because we don't know enough about your scenario. My instinct though would be to go for #2 or #3. Having the primary code not have to switch threads is likely the least impactful change to your code.
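And a rough illustration of #3 (hypothetical: the sliced work loop and class name are made up), where a timer signals a CancellationTokenSource when the time is up and the operation polls the token:
using System;
using System.Threading;

static class CancellableWork
{
    public static bool TryDoLengthyOperation(int timeoutInMilliseconds)
    {
        var cts = new CancellationTokenSource();
        // CancellationTokenSource.CancelAfter is .NET 4.5+, so use a timer here.
        using (new Timer(_ => cts.Cancel(), null, timeoutInMilliseconds, Timeout.Infinite))
        {
            for (int step = 0; step < 1000; step++)
            {
                if (cts.Token.IsCancellationRequested)
                    return false; // timed out
                // ... do one slice of the lengthy work here ...
            }
            return true;
        }
    }
}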
