How to map a Sonar rule set to the equivalent Parasoft rule set?

I used to use the following Parasoft rule sets for SCA:
JTest Rule ID and Rule Description:
BD.PB.ARRAY-1 (Avoid accessing arrays out of bounds)
BD.EXCEPT.NP-1 (Avoid Null Pointer Exception)
CDD.DUPC-2 (Avoid code duplication)
MOBILE.J2ME.OOME-3 (Catch 'OutOfMemoryError' for large array allocations)
BD.RES.LEAKS-1 (Ensure resources are deallocated)
BD.PB.ZERO-1 (Avoid division by zero)
UC.UPC-2 (Avoid unused "private" classes or interfaces)
UC.DEAD-1 (Avoid dead stores (variable never used))
OPT.CEL-3 (Do not call methods in loop condition statements)
BD.PB.DEREF-2 (Avoid dereferencing before checking for null)
DotTest ID and Rule Description:
OOM.CYCLO-2 (Follow the limit for Cyclomatic Complexity)
METRICS.CBO-1 (Follow the limit for Coupling between objects)
METRICS.MI-1 (Follow the limit for Maintainability Index (70))
CS.EXCEPT.RETHROW-2 (Avoid clearing stack trace while rethrowing exceptions)
IFD.SRII-4 (Implement IDisposable in types which are using system resources)
IFD.SRIF-1 (Provide finalizers in types which use resources)
CS.CDD.DUPC-2 (Avoid code duplication)
CS.CDD.DUPM-2 (Avoid method duplication)
OPU.IGHWE-1 (Override the GetHashCode method whenever you override the Equals method)
Now I would like to know whether the SonarQube rule set covers the above Parasoft rules, but it is hard for me to tell which rules in the SonarQube rule sets are equivalent to the Parasoft rules above. Does anyone know?
Thanks
June

Related

Clojure atomic derive and reset of an atom

I wrote a function to get the old value of an atom while putting a new value, all in one atomic operation:
(defn get-and-reset! [at newval]
  "Resets atom to newval and returns the old value. Atomic."
  (let [tmp (atom [])]
    (swap! at #(do (reset! tmp %) newval))
    @tmp))
The documentation says the swap! function shouldn't have side effects because it can be called multiple times. That alone doesn't seem like a problem since tmp never leaves the function and it's the last value that it gets reset! to that matters. The function seems to work but I haven't tested it thoroughly with multiple threads, etc. Is this local side-effect a safe exception to the documentation, or am I missing some other subtle problem?
Yes, that will work with the current implementation of atoms in Clojure, and is (almost) guaranteed to work by contract.
The key here is that atoms are synchronous. Therefore, the inner swap! is guaranteed to complete before the outer swap!. Since tmp is only used locally, from a single thread, the inner swap! is also guaranteed not to conflict with a swap! (of tmp) on another thread.
While the outer swap! (i.e., swap! of at) could conflict with other threads, this swap! will retry when a conflict is detected. Since swap! is synchronous, these retries will occur serially w.r.t. the thread the swap! is invoked on. I suppose it's conceivable this last condition does not necessarily hold. E.g., it would be possible for an implementation of atoms to perform the swap! on a different thread, and issue retries as soon as a conflict is detected (without waiting for previous tries to finish). However, that's not the way atoms are currently implemented, and (in my opinion) doesn't seem like a very likely way to implement atoms.
If this weakness bothers you, you can use compare-and-set! instead:
(defn get-and-reset! [at newval]
  "Resets atom to newval and returns the old value. Atomic."
  (loop [oldval @at]
    (if (compare-and-set! at oldval newval)
      ;; then (no conflict => return oldval)
      oldval
      ;; else (conflict => retry)
      (recur @at))))
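For illustration, a rough usage sketch (the atom name counter is made up for this example; both versions of get-and-reset! behave the same way):

(def counter (atom 0))

(get-and-reset! counter 42)   ;=> 0, and counter now holds 42
@counter                      ;=> 42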
Atoms cannot do what you are trying to do.
Atoms are only defined for uncoordinated synchronous updates of a single identity. For instance, the functions used to update atoms may run many times, so whatever you do with that value may happen many times for each value that makes it into the atom.
Agents are often a better choice for this sort of thing, because if you send an action to an agent it will run at most once:
"At any point in time, at most one action for each Agent is being executed.
Actions dispatched to an agent from another single agent or thread will occur in the order they were sent"
Another option is to add a watch to the agent or atom and have that watch react to each change after it happens (sketched below). If you can convince yourself that neither of these cases works for you, then you may have found one of the cases where coordinated changes are actually required, and then refs would be the better tool, though this is rare. Usually agents or atoms with watches cover most situations.
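A rough sketch of the watch approach, assuming an atom named a and a watch key :log invented for this example:

(def a (atom 0))

(add-watch a :log
  (fn [key reference old-val new-val]
    (println "changed from" old-val "to" new-val)))

(swap! a inc)   ; the watch fires after the change and prints "changed from 0 to 1"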

Is optimistic synchronization wait-free for adds, removes, and contains?

If you scroll one page down from page 205 of the book "The Art of Multiprocessor Programming" (Elsevier, 2012, ISBN 9780123977953) to page 206 (Section 9.6, Optimistic Synchronization): https://books.google.com/... you'll see the add/remove/contains methods for optimistic synchronization (Figure 9.11, The OptimisticList class: the add() method traverses the list ignoring locks, acquires locks, and validates before adding the new node. Figure 9.12, The OptimisticList class: the remove() method traverses ignoring locks, acquires locks, and validates before removing the node.).
In the following section on lazy synchronization, it goes on to state (while referring to optimistic synchronization)
The next step is to refine this algorithm so that contains() calls are wait-free, and add() and remove() methods, while still blocking, traverse the list only once
This seems to be saying that the contains method isn't wait free, and thus neither would the add or remove methods be. But I can't seem to see why that would be the case.
Lazy synchronization is based on optimistic synchronization. With lazy synchronization you traverse the list only once, not acquiring any locks, unlike e.g. hand-over-hand locking. When you have reached your destination for remove/add, you need to lock the current and predecessor node.
The big difference is that when you remove a node, you first have to mark it as deleted and then physically delete it (garbage collector).
Why is contains wait-free?
Unlike optimistic synchronization, we don't need to lock the current node. Recall that we lock the current node so that another thread can't delete it while we are returning true.
Because a removed node would have been marked first, we can simply check whether the current node is unmarked and has the desired key. There is no need for any locks. This makes it wait-free.
A sample code could look like this:
public boolean contains(T item) {
    int key = item.hashCode();
    Node curr = this.head;
    while (curr.key < key) {
        curr = curr.next;
    }
    return curr.key == key && !curr.marked;
}

Delphi threading - which parts of code need to be protected/synchronized?

So far I thought that any operation done on a "shared" object (common to multiple threads) must be protected with "synchronize", no matter what. Apparently, I was wrong - in the code I'm studying recently there are plenty of classes (thread-safe ones, as the author claims) and only one of them uses a critical section for almost every method.
How do I find which parts/methods of my code need to be protected with a CriticalSection (or any other method) and which do not?
So far I haven't stumbled upon any interesting explanation / article / blog note, all google results are:
a) examples of synchronization between a thread and the GUI, from a simple progress bar to the most complex cases, but the lesson is always the same: each time you access/modify a property of a GUI component, do that in "Synchronize". But nothing more.
b) articles explaining Critical Sections, Mutexes etc. - just different approaches to protection/synchronization.
c) examples of very, very simple thread-safe classes (a thread-safe stack or list) - they all do the same thing: implement lock/unlock methods which enter/leave a critical section and return the actual stack/list pointer on locking.
Now I'm looking for an explanation of which parts of code should be protected.
It could be in the form of code ;) but please don't provide me with one more "using Synchronize to update a progressbar" example... ;)
thank you!
You are asking for specific answers to a very general question.
Basically, apart from UI operations, you should protect every access to shared memory/resources so that two potentially competing threads cannot:
read inconsistent memory
write memory at the same time
try to use the same resource at the same time from more than one thread... unless the resource is thread-safe.
Generally, I consider any other operation thread-safe, including operations that access non-shared memory or non-shared objects.
For example, consider this object:
type
  TThrdExample = class
  private
    FValue: Integer;
  public
    procedure Inc;
    procedure Dec;
    function Value: Integer;
    procedure ThreadInc;
    procedure ThreadDec;
    function ThreadValue: Integer;
  end;

threadvar
  ThreadValue: Integer;
Inc, Dec and Value are methods which operate on the FValue field. These methods are not thread-safe until you protect them with some synchronization mechanism. It could be a TMultiReadExclusiveWriteSynchronizer for the Value function and a critical section for the Inc and Dec methods.
The ThreadInc and ThreadDec methods operate on the ThreadValue variable, which is declared as threadvar, so I consider them thread-safe because the memory they access is not shared between threads... each call from a different thread will access a different memory address.
If you know that, by design, a class should be used only in one thread or inside other synchronization mechanisms, you're free to consider that thread safe by design.
If you want more specific answers, I suggest you try with a more specific question.
Best regards.
EDIT: Maybe someone will say the integer field is a bad example, because you can consider integer operations atomic on Intel/Windows and thus there is no need to protect it... but I hope you get the idea.
You misunderstood the TThread.Synchronize method.
The TThread.Synchronize and TThread.Queue methods execute the protected code in the context of the main (GUI) thread. That is why you should use Synchronize or Queue to update GUI controls (like a progress bar) - normally only the main thread should access GUI controls.
Critical sections are different - the protected code is executed in the context of the thread that acquired the critical section, and no other thread is permitted to acquire it until the former thread releases it.
You use a critical section when there's a need for a certain set of objects to be updated atomically. This means they must at all times be either already updated completely or not yet updated at all. They must never be accessible in a transitional state.
For example, with simple integer reading/writing this is not the case. The operation of reading an integer, as well as the operation of writing it, is atomic already: you cannot read an integer in the middle of the processor writing it, half-updated. It's either the old value or the new value, always.
But if you want to increment the integer atomically, you have not one, but three operations you have to do at once: read the old value into the processor's cache, increment it, and write it back to memory. Each operation is atomic, but the three of them together are not.
One thread might read the old value (say, 200), increment it by 5 in cache, and at the same time another thread might read the value too (still 200). Then the first thread writes back 205, while the second thread increments its cached value of 200 to 203 and writes back 203, overwriting 205. The result of two increments (+5 and +3) should be 208, but it's 203 due to non-atomicity of operations.
So, you use critical sections when:
A variable, set of variables, or any resource is used from several threads and needs to be updated atomically.
It is not atomic by itself (for example, a call to a function whose body is already guarded by a critical section is effectively an atomic operation already).
Have a read of this documentation
http://www.eonclash.com/Tutorials/Multithreading/MartinHarvey1.1/ToC.html
If you use messaging to communicate between threads, then you can basically ignore synchronisation primitives completely, because each thread only accesses its internal structures and the messages themselves. In essence, this is a far easier and more scalable architecture than using synchronisation primitives.

Clojure mutable storage types

I'm attempting to learn Clojure from the API and documentation available on the site. I'm a bit unclear about mutable storage in Clojure and I want to make sure my understanding is correct. Please let me know if there are any ideas that I've gotten wrong.
Edit: I'm updating this as I receive comments on its correctness.
Disclaimer: All of this information is informal and potentially wrong. Do not use this post for gaining an understanding of how Clojure works.
Vars always contain a root binding and possibly a per-thread binding. They are comparable to regular variables in imperative languages and are not suited for sharing information between threads. (thanks Arthur Ulfeldt)
Refs are locations shared between threads that support atomic transactions that can change the state of any number of refs in a single transaction. Transactions are committed upon exiting sync expressions (dosync) and conflicts are resolved automatically with STM magic (rollbacks, queues, waits, etc.)
Agents are locations that enable information to be asynchronously shared between threads with minimal overhead by dispatching independent action functions to change the agent's state. Dispatches return immediately and are therefore non-blocking, although an agent's value isn't updated until a dispatched function has completed.
Atoms are locations that can be synchronously shared between threads. They support safe manipulation between different threads.
Here's my friendly summary based on when to use these structures (a short sketch follows the list):
Vars are like regular old variables in imperative languages. (avoid when possible)
Atoms are like Vars but with thread-sharing safety that allows for immediate reading and safe setting. (thanks Martin)
An Agent is like an Atom, but rather than blocking it spawns a new thread to calculate its value; it only blocks if it is in the middle of changing a value, and it can let other threads know that it has finished assigning.
Refs are shared locations that lock themselves in transactions. Instead of making the programmer decide what happens during race conditions for every piece of locked code, we just start up a transaction and let Clojure handle all the lock conditions between the refs in that transaction.
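For a concrete feel of the four reference types, a minimal sketch (the names are made up for this example):

(def v 1)              ; Var: root binding, thread-locally rebindable if declared dynamic
(def r (ref 1))        ; Ref: coordinated, changed inside a transaction: (dosync (alter r inc))
(def ag (agent 1))     ; Agent: asynchronous, changed via (send ag inc)
(def at (atom 1))      ; Atom: synchronous, changed via (swap! at inc)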
Also, a related concept is the function future. To me, it seems like a future object can be described as a synchronous Agent where the value can't be accessed at all until the calculation is completed. It can also be described as a non-blocking Atom. Are these accurate conceptions of future?
It sounds like you are really getting Clojure! good job :)
Vars have a "root binding" visible in all threads, and each individual thread can change the value it sees without affecting the other threads. If my understanding is correct, a var cannot exist in just one thread without a root binding that is visible to all, and it can't be "rebound" until it has been defined with (def ...) the first time.
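A small sketch of a root binding plus a per-thread binding (the var name *x* is invented for this example; marking it ^:dynamic is required for binding in current Clojure versions):

(def ^:dynamic *x* :root)   ; root binding, visible in all threads

(binding [*x* :local]       ; thread-local rebinding, visible only in this thread
  (println *x*))            ; prints :local
(println *x*)               ; prints :root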
Refs are committed at the end of the (dosync ...) transaction that encloses the changes, but only when the transaction was able to finish in a consistent state.
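For example, a minimal sketch of a transaction that touches two refs (the account names are invented for this illustration):

(def from (ref 100))
(def to   (ref 0))

(dosync
  (alter from - 25)
  (alter to   + 25))
;; both changes become visible together only when the dosync block commits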
I think your conclusion about Atoms is wrong:
Atoms are like Vars but with thread-sharing safety that blocks until the value has changed
Atoms are changed with swap! or low-level with compare-and-set!. This never blocks anything. swap! works like a transaction with just one ref:
1. the old value is taken from the atom and stored thread-locally
2. the function is applied to the old value to generate a new value
3. if this succeeds, compare-and-set is called with the old and new value; only if the value of the atom has not been changed by any other thread (it still equals the old value) is the new value written, otherwise the operation restarts at (1) until it eventually succeeds.
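A rough sketch of those steps written in terms of compare-and-set! (this is only an illustration, not the actual implementation of swap!; the name my-swap! is made up):

(defn my-swap! [a f & args]
  (loop []
    (let [oldval @a                            ; (1) read the current value
          newval (apply f oldval args)]        ; (2) apply the function to it
      (if (compare-and-set! a oldval newval)   ; (3) write only if unchanged meanwhile
        newval
        (recur)))))                            ; otherwise retry from (1)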
I've found two issues with your question.
You say:
If an agent is accessed while an action is occurring then the value isn't returned until the action has finished
http://clojure.org/agents says:
the state of an Agent is always immediately available for reading by any thread
I.e. you never have to wait to get the value of an agent (I assume the value changed by an action is proxied and changed atomically).
The code for the deref-method of an Agent looks like this (SVN revision 1382):
public Object deref() throws Exception {
    if (errors != null) {
        throw new Exception("Agent has errors", (Exception) RT.first(errors));
    }
    return state;
}
No blocking is involved.
Also, I don't understand what you mean (in your Ref section) by
Transactions are committed on calls to deref
Transactions are committed when all actions of the dosync block have been completed, no exceptions have been thrown and nothing has caused the transaction to be retried. I think deref has nothing to do with it, but maybe I misunderstand your point.
Martin is right when he says that the atom operation restarts at (1) until it eventually succeeds.
It is also called spin waiting.
While it is not really blocking on a lock, the thread that performs the operation is busy until the operation succeeds, so it is a blocking operation and not an asynchronous one.
Also, about futures: Clojure 1.1 added abstractions for promises and futures.
A promise is a synchronization construct that can be used to deliver a value from one thread to another. Until the value has been delivered, any attempt to dereference the promise will block.
(def a-promise (promise))
(deliver a-promise :fred)
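For completeness, dereferencing then returns the delivered value (and would have blocked if attempted before the deliver call):

@a-promise   ;=> :fred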
Futures represent asynchronous computations. They are a way to get code to run in another thread, and obtain the result.
(def f (future (some-sexp)))
(deref f) ; blocks the thread that derefs f until value is available
Vars don't always have a root binding. It's legal to create a var without a binding using
(def x)
or
(declare x)
Attempting to evaluate x before it has a value will result in
Var user/x is unbound.
[Thrown class java.lang.IllegalStateException]

Is this a safe version of double-checked locking?

Slightly modified version of canonical broken double-checked locking from Wikipedia:
class Foo {
    private Helper helper = null;

    public Helper getHelper() {
        if (helper == null) {
            synchronized (this) {
                if (helper == null) {
                    // Create new Helper instance and store reference on
                    // stack so other threads can't see it.
                    Helper myHelper = new Helper();
                    // Atomically publish this instance.
                    atomicSet(helper, myHelper);
                }
            }
        }
        return helper;
    }
}
Does simply making the publishing of the newly created Helper instance atomic make this double checked locking idiom safe, assuming that the underlying atomic ops library works properly? I realize that in Java, one could just use volatile, but even though the example is in pseudo-Java, this is supposed to be a language-agnostic question.
See also:
Double checked locking Article
It entirely depends on the exact memory model of your platform/language.
My rule of thumb: just don't do it. Lock-free (or reduced lock, in this case) programming is hard and shouldn't be attempted unless you're a threading ninja. You should only even contemplate it when you've got profiling proof that you really need it, and in that case you get the absolute best and most recent book on threading for that particular platform and see if it can help you.
I don't think you can answer the question in a language-agnostic fashion without getting away from code completely. It all depends on how synchronized and atomicSet work in your pseudocode.
The answer is language dependent - it comes down to the guarantees provided by atomicSet().
If the construction of myHelper can be spread out until after the atomicSet() call, then it doesn't matter how atomically the variable is assigned to the shared state.
i.e.
// Create new Helper instance and store reference on
// stack so other threads can't see it.
Helper myHelper = new Helper(); // ALLOCATE MEMORY HERE BUT DON'T INITIALISE
// Atomically publish this instance.
atomicSet(helper, myHelper); // ATOMICALLY POINT UNINITIALISED MEMORY from helper
// other thread gets run at this time and tries to use helper object
// AT THE PROGRAMS LEISURE INITIALISE Helper object.
If this is allowed by the language then the double checking will not work.
Using volatile would not prevent multiple instantiations - however, using synchronized will prevent multiple instances from being created. However, with your code it is possible that helper is returned before it has been set up (thread 'A' instantiates it, but before it is set up thread 'B' comes along, sees that helper is non-null and so returns it straight away). To fix that problem, remove the first if (helper == null).
Most likely it is broken, because the problem of a partially constructed object is not addressed.
To all the people worried about a partially constructed object:
As far as I understand, the problem of partially constructed objects is only a problem within constructors. In other words, within a constructor, if an object references itself (including its subclass) or its members, then there are possible issues with partial construction. Otherwise, when a constructor returns, the object is fully constructed.
I think you are confusing partial construction with the different problem of how the compiler optimizes the writes. The compiler can choose to A) allocate the memory for the new Helper object, B) write the address to myHelper (the local stack variable), and then C) invoke any constructor initialization. Anytime after point B and before point C, accessing myHelper would be a problem.
It is this compiler optimization of the writes, not partial construction that the cited papers are concerned with. In the original single-check lock solution, optimized writes can allow multiple threads to see the member variable between points B and C. This implementation avoids the write optimization issue by using a local stack variable.
The main scope of the cited papers is to describe the various problems with the double-check lock solution. However, unless the atomicSet method is also synchronizing against the Foo class, this solution is not a double-check lock solution. It is using multiple locks.
I would say this all comes down to the implementation of the atomic assignment function. The function needs to be truly atomic, it needs to guarantee that processor local memory caches are synchronized, and it needs to do all this at a lower cost than simply always synchronizing the getHelper method.
Based on the cited paper, in Java, it is unlikely to meet all these requirements. Also, something that should be very clear from the paper is that Java's memory model changes frequently. It adapts as better understanding of caching, garbage collection, etc. evolve, as well as adapting to changes in the underlying real processor architecture that the VM runs on.
As a rule of thumb, if you optimize your Java code in a way that depends on the underlying implementation, as opposed to the API, you run the risk of having broken code in the next release of the JVM. (Although, sometimes you will have no choice.)
dsimcha:
If your atomicSet method is real, then I would try sending your question to Doug Lea (along with your atomicSet implementation). I have a feeling he's the kind of guy that would answer. I'm guessing that for Java he will tell you that it's cheaper to always synchronize and to look to optimize somewhere else.
