What is the real difference between threadsafe and atomic?

Do "thread safe" and "atomic" have the same meaning in all programming languages? Can they be used interchangeably? If I use a lock (e.g. threading.Lock in Python) before writing and before reading a variable, would that make the operation thread safe as well as atomic? Thank you.
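To make the question concrete, here is a minimal Python sketch (the Counter class and its method names are made up for illustration, assuming CPython's threading module). Taking the lock around both the write and the read makes the increment atomic with respect to every other thread that also takes the lock, which is what most people mean when they call the operation thread safe; it does not make the underlying statement atomic on its own.

import threading

# Hypothetical example: a shared counter whose read-modify-write is
# guarded by a lock.
class Counter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Without the lock, "self._value += 1" is a separate read, add and
        # write; two threads can interleave those steps and lose updates.
        with self._lock:
            self._value += 1

    def value(self):
        # Reads take the same lock, so they never observe a half-done update.
        with self._lock:
            return self._value

def worker(counter, n):
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=worker, args=(counter, 100_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value())  # 400000 every run, because the lock serializes access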

Related

In concurrent programming is it possible that, by using locks, a program might sometimes use more processors than are necessary?

This is an exam question (practice exam, not the real one). It's about concurrent programming using a multi-core processor and the problems with using locks.
"In concurrent programming is it possible that, by using locks, a program might sometimes use more processors than are necessary?"
In other words, is this ever possible? It's a true/false question. I can't find an answer anywhere and I'm revising for my exams.
A concurrent program with N threads of execution using locks can, at any point in time, have M = 0 .. N-1 threads waiting for locks; thus the program can be utilizing at most N - M processors, since waiting for a lock does not require a processor.
Thus, no, using locks does not increase the number of processors required by a concurrent program.
With an efficient implementation of multi-threading and locks, if a thread blocks waiting for a lock for any significant time, the scheduler / lock implementation will reassign the core to do something else.
But since the exam question is asking if it is ever possible to use more processors than are strictly necessary, the answer is that it depends on the implementation of threads / locks / scheduling. For instance, there is a kind of lock called a spinlock where the lock implementation does NOT surrender control of the processor while waiting to acquire a lock. Instead, it polls the lock in a tight loop trying to acquire it.
Why would you do that? Well, if the lock is likely to become available in a short enough period of time, then the CPU time wasted "spinning" on the lock is less than what would be spent performing a full context switch.
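To make the spinlock idea concrete, here is a rough Python sketch; it is purely illustrative (real spinlocks are built on atomic CPU instructions such as test-and-set, not on a Python lock), but it shows the busy-wait shape: the waiting thread keeps its processor occupied polling instead of handing it back to the scheduler.

import threading

# Illustrative only: a "spinlock" faked on top of threading.Lock's
# non-blocking acquire, to show the polling loop described above.
class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # Poll in a tight loop instead of blocking: the thread burns CPU
        # the whole time it waits, where a blocking lock would sleep.
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()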
So I don't think your exam question has a simple yes / no answer.

What does it mean that Python3 asyncio's Queue is not thread safe

I find it quite confusing how thread safety is defined in the Python language.
Someone said it is thread safe due to the CPython implementation.
asyncio's Queue, on the other hand, is documented as not thread safe.
It seems like they mean different things when they talk about thread-safe. What is it really?
asyncio's Queue is not thread safe
Someone said it is thread safe due to the CPython implementation.
No, the link you provided says that "Python's built-in structures" are thread-safe. That means the data types available without imports (like int, list, dict, etc.) are thread-safe.
It doesn't mean that every object in Python standard library is thread-safe.
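As a sketch of what "not thread safe" means in practice here (the producer function below is made up for illustration): a plain OS thread should not call the queue's methods directly; the documented pattern is to hand the operation over to the event loop's thread, for example with loop.call_soon_threadsafe or asyncio.run_coroutine_threadsafe.

import asyncio
import threading

async def main():
    queue = asyncio.Queue()
    loop = asyncio.get_running_loop()

    def producer():
        # Runs in a plain OS thread: don't touch the asyncio.Queue directly.
        # Schedule each put on the event loop's own thread instead.
        for i in range(3):
            loop.call_soon_threadsafe(queue.put_nowait, i)

    threading.Thread(target=producer).start()

    for _ in range(3):
        print(await queue.get())  # consumed safely inside the event loop

asyncio.run(main())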

Is a Lua table thread safe?

Say I have a Lua table t = { ["a"] = {}, ["b"] = {} }.
My question is: I have two threads, thread A and thread B.
If I create a separate lua_State for each of these two threads through lua_newthread, and thread A only reads/writes t.a while thread B only reads/writes t.b,
should I use lua_lock in each thread above?
If the answer is YES, then does every operation on t need a lua_lock?
TL;DR: A Lua engine-state is not thread-safe, so there's no reason to make the Lua table thread-safe.
A lua_State is not the engine state, though it references it. Instead, it is the state of a Lua thread, which bears no relation to an application thread. Lua threads sharing the same engine state cannot execute concurrently, as the Lua engine is inherently single-threaded (you may use an arbitrary number of engines at the same time, though); instead they are cooperatively multitasked.
So lua_State *lua_newthread (lua_State *L); creates a new Lua thread, not an OS thread.
lua_lock and the like do not refer to thread safety; they were the way native code could keep hold of Lua objects across calls into the engine in version 2 of the implementation: https://www.lua.org/manual/2.1/subsection3_5_6.html
The modern way is using the registry, a Lua table accessible from native code: http://www.lua.org/manual/5.3/manual.html#4.5
A Lua table isn't thread safe, but there's no need to lock here, since the threads don't read/write the same element.
No, a Lua table is not thread safe.
And yes, all operations on table t will need lua_lock, because none of them is an atomic operation.

Lua operations that work in a multithreaded environment

My application uses Lua in a multithreaded environment with a global mutex. It is implemented like this:
1. The thread locks the mutex,
2. calls lua_newthread,
3. performs some initialization on the coroutine,
4. runs lua_resume on the coroutine,
5. unlocks the mutex.
lua_lock/unlock is not implemented, and the GC is stopped while Lua works with the coroutine.
My question is: can I perform steps 2 and 3 without locking, if the initialization process does not require any global Lua structs? Can I perform the whole process without locking at all, if the coroutine does not require globals either?
In what cases can I generally use Lua functions without locking?
Lua does not guarantee thread safety if you try to use a single Lua state from separate OS threads without lua_lock/unlock. If you want a multithreaded environment, you need to use an individual state for each OS thread.
Look at some multithreading solutions, e.g. https://github.com/effil/effil.
In what cases can I generally use Lua functions without locking?
On the same Lua state (or threads derived from the same source Lua state)?
None.
Lua is thread-safe in the sense that separate Lua state instances can be executed in parallel. There are absolutely no thread safety guarantees when you call any Lua API function from two different threads on the same Lua state instance.
You cannot do any of the steps 2, 3, or 4 outside of some synchronization mechanism to prevent concurrent access to the same state. It doesn't matter if it's just creating a new thread (which allocates memory) or some "initialization process" (which will likely allocate memory). Even things that don't allocate memory are still not allowed.
Lua offers no guarantees about thread-safety within a Lua state.

How to define threadsafe?

Threadsafe is a term that is thrown around in documentation; however, there is seldom an explanation of what it means, especially in language that is understandable to someone learning threading for the first time.
So how do you explain Threadsafe code to someone new to threading?
My ideas for options at the moment are:
A list of what makes code thread safe vs. thread unsafe
The book definition
A useful metaphor
Multithreading leads to non-deterministic execution - You don't know exactly when a certain piece of parallel code is run.
Given that, this wonderful multithreading tutorial defines thread safety like this:
Thread-safe code is code which has no indeterminacy in the face of any multithreading scenario. Thread-safety is achieved primarily with locking, and by reducing the possibilities for interaction between threads.
This means no matter how the threads are run in particular, the behaviour is always well-defined (and therefore free from race conditions).
Eric Lippert says:
When I'm asked "is this code thread safe?" I always have to push back and ask "what are the exact threading scenarios you are concerned about?" and "exactly what is correct behaviour of the object in every one of those scenarios?".
It is unhelpful to say that code is "thread safe" without somehow communicating what undesirable behaviors the utilized thread safety mechanisms do and do not prevent.
G'day,
A good place to start is to have a read of the POSIX paper on thread safety.
Edit: Just the first few paragraphs give you a quick overview of thread safety and re-entrant code.
HTH
cheers,
I may be wrong, but one of the criteria for being thread safe is to use local variables only. Using global variables can have undefined results if the same function is called from different threads.
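A small Python sketch of that point (the function names are hypothetical, and whether the lost updates actually show up on a given run depends on timing and the interpreter): the version that uses only local variables is safe to call from any number of threads, while the one mutating a module-level global can lose increments.

import threading

total = 0  # shared module-level state

def unsafe_add(n):
    # Reads and writes the global: the load/add/store of "total += 1" can
    # interleave between threads, so increments may be lost.
    global total
    for _ in range(n):
        total += 1

def safe_sum(n):
    # Touches only local variables, so each thread works on its own data.
    local = 0
    for _ in range(n):
        local += 1
    return local

threads = [threading.Thread(target=unsafe_add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # may be less than 400000 because of lost updates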
A thread safe function / object (hereafter referred to as an object) is an object which is designed to support multiple concurrent calls. This can be achieved by serialization of the parallel requests or some sort of support for intertwined calls.
Essentially, if the object safely supports concurrent requests (from multiple threads), it is thread safe. If it is not thread safe, multiple concurrent calls could corrupt its state.
Consider a log book in a hotel. If a person is writing in the book and another person comes along and starts to concurrently write his message, the end result will be a mix of both messages. This can also be demonstrated by several threads writing to an output stream.
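Here is a small Python sketch of the "several threads writing to an output stream" version of that metaphor (the guest names and message are made up): each log entry is written in several pieces, and holding a lock around the whole entry keeps one thread's pieces from interleaving with another's.

import sys
import threading

log_lock = threading.Lock()

def write_entry(guest, message):
    # The entry is written in several separate calls; the lock ensures the
    # pieces from different threads do not end up mixed together.
    with log_lock:
        sys.stdout.write("[" + guest + "] ")
        sys.stdout.write(message)
        sys.stdout.write("\n")

threads = [threading.Thread(target=write_entry, args=("guest-%d" % i, "had a lovely stay"))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()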
I would say that to understand thread safety, start by understanding the difference between a thread-safe function and a reentrant function.
Please check The difference between thread-safety and re-entrancy for details.
Thread-safe code is code that won't fail because the same data was changed in two places at once. Thread safe is a narrower concept than concurrency-safe, because it presumes that it was in fact two threads of the same program, rather than (say) hardware modifying data, or the OS.
A particularly valuable aspect of the term is that it lies on a spectrum of concurrent behavior, where thread safe is the strongest, interrupt safe is a weaker constraint than thread safe, and reentrant even weaker.
In the case of thread safe, this means that the code in question conforms to a consistent API and makes use of resources such that other code in a different thread (such as another, concurrent instance of itself) will not cause an inconsistency, so long as it also conforms to the same use pattern. The use pattern MUST be specified for any reasonable expectation of thread safety to be had.
The interrupt safe constraint doesn't normally appear in modern userland code, because the operating system does a pretty good job of hiding this, however, in kernel mode this is pretty important. This means that the code will complete successfully, even if an interrupt is triggered during its execution.
The last one, reentrant, is almost guaranteed with all modern languages, in and out of userland, and it just means that a section of code may be entered more than once, even if an earlier invocation has not yet proceeded out of that section. This can happen in the case of recursive function calls, for instance. It's very easy to violate the language-provided reentrancy by accessing a shared global state variable in the non-reentrant code.
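A short Python illustration of that last point (the function names are hypothetical): the first function is reentrant because it depends only on its argument and local variables, so a nested or recursive call cannot disturb an outer one; the second breaks reentrancy by accumulating into shared module-level state.

# Reentrant: uses only its argument and locals, so recursive or overlapping
# invocations cannot interfere with one another.
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

# Not reentrant: a second invocation that starts before the first finishes
# (recursion, a signal handler, another thread) clobbers _scratch.
_scratch = []

def collect_digits(n):
    # Collects the decimal digits of a positive integer into a shared buffer.
    _scratch.clear()
    while n:
        _scratch.append(n % 10)
        n //= 10
    return list(reversed(_scratch))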
