Is Kotlin ?.let thread-safe?
Let's say a variable can be changed from a different thread.
Is using a?.let { /* */ } thread-safe? If it is equivalent to if (a != null) { block() }, can it happen that a is non-null in the if check but already null inside block()?
a?.let { block() } is indeed equivalent to if (a != null) block().
This also means that if a is a mutable variable, then:
a might be reassigned after the null check and hold a null value at some point during the execution of block();
All concurrency-related effects are in force, and proper synchronization is required if a is shared between threads, to avoid a race condition in case block() accesses a again.
However, since let { ... } actually passes its receiver as the single argument to the function it takes, it can be used to capture the value of a and use it inside the lambda instead of accessing the property again in block(). For example:
a?.let { notNullA -> block(notNullA) }
// with implicit parameter `it`, this is equivalent to:
a?.let { block(it) }
Here, the value of a passed as the argument into the lambda is guaranteed to be the same value that was checked for null. However, reading a again inside block() might yield null or a different value, and access to the mutable state of the instance that was passed in should still be properly synchronized.
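As a minimal sketch of the difference (assuming a nullable var a shared between threads; the names here are illustrative only):
// Hypothetical shared mutable state: another thread may set this to null at any time.
var a: String? = "hello"

fun safeUsage() {
    // `it` captures the value that passed the null check, so it cannot become null here.
    a?.let { println(it.length) }
}

fun racyUsage() {
    a?.let {
        // Re-reading the property races with the other thread: `a` may already be null,
        // so a safe call (or another null check) is still required.
        println(a?.length)
    }
}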
I want to find all methods that can explicitly return null.
Is this possible in NDepend using CQL?
Not for now; CQL doesn't yet know anything about the values of variables, fields, or return values.
However, the default rule below is proposed. The idea is that if a method returns a reference, it should never be null, and a contract should be added to assert this. If you want such a method to be able to return no result, use the Try... pattern instead, as in TryParse(string s, out T val):bool.
// <Name>Public methods returning a reference needs a contract to ensure that a non-null reference is returned</Name>
warnif count > 0
let ensureMethods = Application.Methods.WithFullName(
"System.Diagnostics.Contracts.__ContractsRuntime.Ensures(Boolean,String,String)")
from ensureMethod in ensureMethods
from m in ensureMethod.ParentAssembly.ChildMethods where
m.IsPubliclyVisible &&
!m.IsAbstract &&
m.ReturnType != null &&
// Identify that the return type is a reference type
(m.ReturnType.IsClass || m.ReturnType.IsInterface) &&
!m.IsUsing(ensureMethod) &&
// Don't match method not implemented yet!
!m.CreateA("System.NotImplementedException".AllowNoMatch())
select new {
m,
ReturnTypeReference = m.ReturnType
}
//<Description>
// **Code Contracts** are useful to decrease ambiguity between callers and callees.
// Not ensuring that a reference returned by a method is *non-null* leaves ambiguity
// for the caller. This rule matches methods returning an instance of a reference type
// (class or interface) that don't use a **Contract.Ensures()** method.
//
// *Contract.Ensures()* is defined in the **Microsoft Code Contracts for .NET**
// library, and is typically used to write a code contract on a returned reference:
// *Contract.Ensures(Contract.Result<ReturnType>() != null, "returned reference is not null");*
// https://visualstudiogallery.msdn.microsoft.com/1ec7db13-3363-46c9-851f-1ce455f66970
//</Description>
//<HowToFix>
// Use *Microsoft Code Contracts for .NET* on the public surface of your API
// to remove most of the ambiguity presented to your clients. Most such ambiguities
// are about *null* vs. *non-null* references.
//
// Don't use a *null* reference to express that a method might not
// return a result. Instead, use the **TryXXX()** pattern exposed, for example,
// by the *System.Int32.TryParse()* method.
//</HowToFix>
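For illustration, here is a C# sketch of both options the rule points to (the repository type and its methods are made up for this example):
using System.Collections.Generic;
using System.Diagnostics.Contracts;

// Hypothetical repository, for illustration only.
public class CustomerNames
{
    private readonly Dictionary<int, string> _names = new Dictionary<int, string>();

    // Contract on the returned reference: callers never have to check for null.
    public string GetName(int id)
    {
        Contract.Ensures(Contract.Result<string>() != null, "returned reference is not null");
        string name;
        return _names.TryGetValue(id, out name) ? name : string.Empty;
    }

    // Try pattern: "no result" is expressed through the bool, never through null.
    public bool TryGetName(int id, out string name)
    {
        return _names.TryGetValue(id, out name);
    }
}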
Consider an operation with a standard asynchronous interface:
std::future<void> op();
Internally, op needs to perform a (variable) number of asynchronous operations to complete; the number of these operations is finite but unbounded, and depends on the results of the previous asynchronous operations.
Here's a (bad) attempt:
/* An object of this class will store the shared execution state in the members;
* the asynchronous op is its member. */
class shared
{
private:
// shared state
private:
// Actually does some operation (asynchronously).
void do_op()
{
...
// Might need to launch more ops.
if(...)
launch_next_ops();
}
public:
// Launches next ops
void launch_next_ops()
{
...
std::async(&shared::do_op, this);
}
};
std::future<void> op()
{
shared s;
s.launch_next_ops();
// Return some future of s used for the entire operation.
...
// s destructed - delayed BOOM!
};
The problem, of course, is that s goes out of scope, so the operations launched later run against a destroyed object.
To amend this, here are the changes:
class shared : public std::enable_shared_from_this<shared>
{
private:
/* The member now takes a shared pointer to itself; hopefully
* this will keep it alive. */
void do_op(std::shared_ptr<shared> p); // [*]
void launch_next_ops()
{
...
std::async(&shared::do_op, this, shared_from_this());
}
};
std::future<void> op()
{
std::shared_ptr<shared> s{new shared{}};
s->launch_next_ops();
...
};
(Aside from the weirdness of an object calling its own method with a shared pointer to itself,) the problem is with the line marked [*]. The compiler (correctly) warns that it's an unused parameter.
Of course, it's possible to fool it somehow, but is this an indication of a fundamental problem? Is there any chance the compiler will optimize away the argument and leave the method with a dead object? Is there a better alternative to this entire scheme? I don't find the resulting code the most intuitive.
No, the compiler will not optimize away the argument. Indeed, that's irrelevant as the lifetime extension comes from shared_from_this() being bound by decay-copy ([thread.decaycopy]) into the result of the call to std::async ([futures.async]/3).
If you want to avoid the warning about an unused parameter, just leave it unnamed; compilers that warn on unused parameters will not warn on unnamed ones.
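That is, the declaration marked [*] can simply drop the parameter name:
// Still keeps *this alive for the duration of the call, but triggers no warning.
void do_op(std::shared_ptr<shared> /* self */);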
An alternative is to make do_op static, meaning that you have to use its shared_ptr argument; this also addresses the duplication between this and shared_from_this. Since this is fairly cumbersome, you might want to use a lambda to convert shared_from_this to a this pointer:
std::async([](std::shared_ptr<shared> const& self){ self->do_op(); }, shared_from_this());
If you can use C++14 init-captures this becomes even simpler:
std::async([self = shared_from_this()]{ self->do_op(); });
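As a self-contained illustration of the lifetime extension described above (a toy widget type rather than the question's shared class):
#include <future>
#include <iostream>
#include <memory>

struct widget : std::enable_shared_from_this<widget>
{
    void work() { std::cout << "still alive\n"; }
    ~widget()   { std::cout << "destroyed\n"; }
};

int main()
{
    std::future<void> fut;
    {
        auto w = std::make_shared<widget>();
        // The init-capture copies the shared_ptr into the closure, and std::async
        // decay-copies the closure into its shared state, so the widget outlives
        // the local `w` that goes out of scope below.
        fut = std::async(std::launch::async, [self = w->shared_from_this()] { self->work(); });
    }
    fut.get();   // "still alive" is always printed before "destroyed"
}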
I have the code below but I'm getting a ConcurrentModificationException. How should I avoid this issue? (I have to use WeakHashMap for some reason.)
WeakHashMap<String, Object> data = new WeakHashMap<String, Object>();
// some initialization code for data
for (String key : data.keySet()) {
if (data.get(key) != null && data.get(key).equals(value)) {
//do something to modify the key
}
}
The Javadoc for the WeakHashMap class explains why this happens:
Map invariants do not hold for this class. Because the garbage
collector may discard keys at any time, a WeakHashMap may behave as
though an unknown thread is silently removing entries
Furthermore, the iterator created under the hood by the enhanced for loop you're using is fail-fast, as the following quote from that Javadoc explains:
The iterators returned by the iterator method of the collections
returned by all of this class's "collection view methods" are
fail-fast: if the map is structurally modified at any time after the
iterator is created, in any way except through the iterator's own
remove method, the iterator will throw a
ConcurrentModificationException. Thus, in the face of concurrent
modification, the iterator fails quickly and cleanly, rather than
risking arbitrary, non-deterministic behavior at an undetermined time
in the future.
Therefore your loop can throw this exception for any of these reasons:
The garbage collector has removed a key from the key set.
Something outside this code added an entry to the map.
A modification occurred inside the loop.
As your intent appears to be processing the objects that have not been GC'd yet, I would suggest using an iterator, as follows:
Iterator<String> it = data.keySet().iterator();
int count = 0;
int maxTries = 3;
while(true) {
try {
while (it.hasNext()) {
String str = it.next();
// do something
}
break;
} catch (ConcurrentModificationException e) {
it = data.keySet().iterator(); // get a new iterator
if (++count == maxTries) throw e;
}
}
You can copy the key set first, but note that the copy holds strong references to the keys after that:
Set<KeyType> keys;
while(true) {
try {
keys = new HashSet<>(weakHashMap.keySet());
break;
} catch (ConcurrentModificationException ignore) {
}
}
for (KeyType key : keys) {
// ...
}
A WeakHashMap's entries are removed automatically once their keys are no longer in ordinary use, and this can effectively happen as if from a different thread. While you are copying keySet() into another Set, entries may therefore be removed concurrently, and a ConcurrentModificationException can still be thrown. You must synchronize the copying, for example by wrapping the map:
Map<String, Object> syncData = Collections.synchronizedMap(data);
Please understand that
Collections.synchronizedSet(data.keySet());
cannot be used, because data.keySet() is backed by the data instance, which is not synchronized here. In more detail: synchronizing on the key set only guards the key set's own methods, but the removals are performed by the WeakHashMap itself, so you have to synchronize on the WeakHashMap.
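A sketch of what that could look like (assuming the synchronized wrapper is created once and used for every access to the map):
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.WeakHashMap;

public class KeySnapshot {
    private final Map<String, Object> data =
            Collections.synchronizedMap(new WeakHashMap<String, Object>());

    Set<String> copyKeys() {
        // Per the Collections.synchronizedMap Javadoc, iterating over a view of the
        // wrapper must itself happen inside a synchronized block on the wrapper.
        synchronized (data) {
            return new HashSet<>(data.keySet());
        }
    }
}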
Probably because your // do something in the iteration is actually modifying the underlying collection.
From ConcurrentModificationException:
For example, if a thread modifies a collection directly while it is iterating over the collection with a fail-fast iterator, the iterator will throw this exception.
And from (Weak)HashMap's keySet():
Returns a Set view of the keys contained in this map. The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. If the map is modified while an iteration over the set is in progress (except through the iterator's own remove operation), the results of the iteration are undefined.
1) std::call_once
A a;
std::once_flag once;
void f ( ) {
call_once ( once, [ ] { a = A {....}; } );
}
2) function-level static
A a;
void f ( ) {
static bool b = ( [ ] { a = A {....}; } ( ), true );
}
For your example usage, hmjd's answer fully explains that there is no difference (except for the additional global once_flag object needed in the call_once case). However, the call_once case is more flexible, since the once_flag object isn't tied to a single scope. As an example, it could be a class member and be used by more than one function:
class X {
std::once_flag once;
void doSomething() {
std::call_once(once, []{ /* init ...*/ });
// ...
}
void doSomethingElse() {
std::call_once(once, []{ /*alternative init ...*/ });
// ...
}
};
Now depending on which member function is called first the initialization code can be different (but the object will still only be initialized once.)
So for simple cases a local static works nicely (if supported by your compiler) but there are some less common uses that might be easier to implement with call_once.
Both code snippets have the same behaviour, even in the presence of exceptions thrown during initialization.
This conclusion is based on (my interpretation of) the following quotes from the C++11 standard (draft N3337):
1. Section 6.7, Declaration statement, paragraph 4, states:
The zero-initialization (8.5) of all block-scope variables with static storage duration (3.7.1) or thread storage duration (3.7.2) is performed before any other initialization takes place. Constant initialization (3.6.2) of a block-scope entity with static storage duration, if applicable, is performed before its block is first entered. An implementation is permitted to perform early initialization of other block-scope variables with static or thread storage duration under the same conditions that an implementation is permitted to statically initialize a variable with static or thread storage duration in namespace scope (3.6.2). Otherwise such a variable is initialized the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization. If control re-enters the declaration recursively while the variable is being initialized, the behavior is undefined.
This means that in:
void f ( ) {
static bool b = ( [ ] { a = A {....}; } ( ), true );
}
b is guaranteed to be initialized once only, meaning the lambda is executed (successfully) once only, meaning a = A {...}; is executed (successfully) once only.
2. Section 30.4.4.2, Function call_once, states:
An execution of call_once that does not call its func is a passive execution. An execution of call_once that calls its func is an active execution. An active execution shall call INVOKE(DECAY_COPY(std::forward<Callable>(func)), DECAY_COPY(std::forward<Args>(args))...). If such a call to func throws an exception the execution is exceptional, otherwise it is returning. An exceptional execution shall propagate the exception to the caller of call_once. Among all executions of call_once for any given once_flag: at most one shall be a returning execution; if there is a returning execution, it shall be the last active execution; and there are passive executions only if there is a returning execution.
This means that in:
void f ( ) {
call_once ( once, [ ] { a = A {....}; } );
}
the lambda argument to std::call_once is executed (successfully) once only, meaning a = A {...}; is executed (successfully) once only.
In both cases a = A{...}; is executed (successfully) once only.
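As a small illustration of the exception behaviour both quotes describe, here is a toy initializer that fails twice before succeeding (the retry applies equally to the block-scope static form):
#include <iostream>
#include <mutex>
#include <stdexcept>

std::once_flag once;
int attempts = 0;

void init() {
    if (++attempts < 3)
        throw std::runtime_error("init failed");
    std::cout << "initialized on attempt " << attempts << "\n";
}

void f() {
    // A call whose initializer throws is an "exceptional execution", not a
    // "returning" one, so the next call to f() runs the initializer again.
    std::call_once(once, init);
}

int main() {
    for (int i = 0; i < 4; ++i) {
        try {
            f();
        } catch (const std::exception& e) {
            std::cout << "attempt " << attempts << " failed: " << e.what() << "\n";
        }
    }
    // A block-scope `static bool b = (init(), true);` behaves the same way:
    // the initialization is retried until it completes without throwing.
}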
Using C# 4.0 features, I want a generic wrapper that encapsulates a function and adds a timeout parameter to it.
For example we have a function like:
T DoLengthyOperation()
Using Func we have:
Func<T>
This is good, and the function can be called either synchronously (Invoke) or asynchronously (BeginInvoke).
Now think of a timeout added to this behavior: if DoLengthyOperation() returns within the specified time we get true, otherwise false.
Something like:
FuncTimeOut<in T1, in T2, ..., out TResult, int timeOut, bool result>
Implement C# Generic Timeout
Don't return true/false for completion; throw an exception instead.
I don't have time to implement it, but it should be possible, and your basic signature would look like this:
T DoLengthyOperation<T>(int TimeoutInMilliseconds, Func<T> operation)
You could call this method either by passing the name of any Func<T> as an argument or by defining it in place as a lambda expression. Unfortunately, you'll also need to provide an overload for each kind of function you want to wrap, as there's currently no way to specify a variable number of generic type arguments.
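A minimal sketch of one possible implementation of that signature on top of Task (throwing on timeout, as suggested above; note that the underlying work is not cancelled and keeps running):
using System;
using System.Threading.Tasks;

public static class Timeouts
{
    // Runs `operation` on the thread pool and waits up to the given timeout.
    // Exceptions thrown by `operation` surface as an AggregateException from Wait/Result.
    public static T DoLengthyOperation<T>(int timeoutInMilliseconds, Func<T> operation)
    {
        Task<T> task = Task.Factory.StartNew(operation);
        if (!task.Wait(timeoutInMilliseconds))
            throw new TimeoutException("Operation did not complete in time.");
        return task.Result;
    }
}

// Usage:
// int x = Timeouts.DoLengthyOperation(1000, () => 42);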
Instead of mixing out parameters and bool, I would construct a separate type to capture the result. For example:
struct Result<T> {
private bool _isSuccess;
private T _value;
public bool IsSuccess { get { return _isSuccess; } }
public T Value { get { return _value; } }
public Result(T value) {
_value = value;
_isSuccess = true;
}
}
This is definitely possible to write. The only problem is that, in order to implement a timeout, it's necessary to do one of the following:
Move the long running operation onto another thread.
Add cancellation support to the long-running operation and signal cancellation from another thread (a sketch of this option appears at the end of this answer).
Ingrain the notion of a timeout into the operation itself and have it check whether the allotted time has expired at many points during the operation.
Which is best for you is hard to determine because we don't know enough about your scenario. My instinct, though, would be to go for #2 or #3: not forcing the primary code to switch threads is likely the least impactful change to your code.
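For instance, option #2 could look roughly like this, using a CancellationTokenSource that cancels after the timeout (the CancellationTokenSource(TimeSpan) constructor is .NET 4.5+; on .NET 4.0 you would start a timer that calls Cancel):
using System;
using System.Threading;

public static class CancellableOps
{
    // Option #2 sketch: the long-running operation cooperatively observes a token,
    // and the CancellationTokenSource signals cancellation once the timeout elapses.
    public static bool TryRun(Action<CancellationToken> operation, TimeSpan timeout)
    {
        using (var cts = new CancellationTokenSource(timeout))
        {
            try
            {
                // The operation is expected to call token.ThrowIfCancellationRequested()
                // (or check token.IsCancellationRequested) periodically.
                operation(cts.Token);
                return true;            // finished within the timeout
            }
            catch (OperationCanceledException)
            {
                return false;           // timed out
            }
        }
    }
}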