Can a var change while in a loop? (about TCP/IP) - multithreading

I'm curious about TCP/IP with multiple threads: if one program changes a variable, can another program recognize the change of that variable while it is in a loop?
The reason I'm curious is that I am making a simple game, and it has a lobby where a client waits for another player to come in. I couldn't think of a way of waiting for the other player.
I made a while (ready == 0) loop, and when another client enters the room it changes the variable (ready) to 1 and sends it to the server, but the room-maker client couldn't break the loop...
Here is my code:
char a[16] = {0};  /* assumed declaration; the buffer isn't shown in the original snippet */
int ready = 0;
while (1) {
    recv(socket, a, sizeof(a), 0);  /* blocks until the server sends data */
    ready = atoi(a);                /* parse the received text as an int */
    if (ready == 1) {
        break;
    }
}
Why does it happen?

The question (if I understand it correctly) is asking whether, when thread A sets a variable to a new value, thread B will "see" the variable's value change.
The short answer is "possibly, but you shouldn't depend on it, because having threads share read/write access to a value like that is undefined behavior and it often won't behave the way you want it to". If you are going to share a variable between threads, you need to either guard access (both read and write) to that value with a mutex, or (in C++) use a std::atomic variable instead of a regular int/bool/etc datatype.
The reason why sharing a plain old variable doesn't work reliably is because computers are very sophisticated now -- the compiler will play a lot of clever tricks to make your program's executable code more efficient, and also modern CPUs will play even more clever tricks at run time to make things faster still. However, neither the compiler nor the CPU "knows" that your ready variable is intended to be shared across threads; in fact they will both assume that it won't be, in order to perform optimizations -- such as caching the value of the variable in a CPU register so that the thread doesn't have to re-read it from RAM every time through the loop. When that optimization occurs, thread A can change the value of ready, but that change may or may not make it out to RAM in a timely manner, and even if it does, thread B may or may not notice the change in RAM (i.e. if it has already cached that value in a register, it won't).
So, the advice is: don't share a plain variable like that -- it's a race condition and your program likely won't work the way you intended. For multithreaded programs you have to be very careful to synchronize shared variables, either with mutexes or (in C++) by using std::atomic types (which tell the compiler that the variables are intended to be shared across threads, so that it can take additional steps to make sure the correct things happen when they are read or written).
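To make that concrete, here is a minimal C++ sketch of the same lobby wait loop with the flag made std::atomic. The second thread and its sleep are invented stand-ins for the real recv() code; the point is only that an atomic flag cannot be optimized away into a register:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Shared flag. std::atomic tells the compiler and CPU that this value is
// read and written by multiple threads, so it must actually be re-read
// on every iteration of the wait loop.
std::atomic<int> ready{0};

void network_thread() {
    // Stand-in for the recv()/atoi() code that parses the server's message.
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    ready.store(1);  // publish the change; the other thread will see it
}

int main() {
    std::thread t(network_thread);
    while (ready.load() != 1) {
        std::this_thread::yield();  // spin politely until the flag flips
    }
    std::cout << "player joined, leaving the lobby\n";
    t.join();
}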

Related

Do I need synchronization to read and write a common cache file in a multithread environment?

Consider the following algorithm, which is running on multiple threads at the same time:
for (i = 0; i < 10000; i++) {
    z = rand(0, 50000);
    if (isset(cache[z])) {
        results[z] = cache[z];
    } else {
        result = z * 100;
        cache[z] = result;
        results[z] = result;
    }
}
The cache and results are both shared variables among the threads. If this algorithm runs as-is, without synchronization, what kind of errors can occur? If two threads try to write concurrently to cache[z] or results[z], can data be lost, or will the data from the thread that won the 'race' simply be accepted?
A more concrete example: let's say thread A and thread B both try to write the number 1000 to cache[10] at the same time, and at the same time thread C tries to read the data in cache[10]. Can thread C's read operation finish in an intermittent state, say as 100, so that thread C then continues working with incorrect data?
USE CASE: The real-life use case I am asking about is hash-table caches. If all of the threads use the same hash-table cache, reading and writing data from and to it, and the data they write to a specific key will always be the same, do I need to synchronize these read and write operations?
Nobody could possibly know. Different languages, compilers, CPUs, platforms, and threading standards could handle this in entirely different ways. There's no way anyone can know what some future compiler, CPU, or platform might do. Unless the documentation or specification for the language or threading standard says what will happen in this case, there is absolutely no way to know what might happen. Of course, if something you're using guarantees particular behavior in this case, then what is guaranteed to happen will happen (unless it's broken).
At one time, there didn't exist any CPUs that buffered writes such that they could be visible out-of-order. But if you wrote code under the assumption that this meant that writes would never become visible out-of-order, that code would be broken on pretty much every modern platform.
This sad tale repeated over and over with numerous compiler optimizations that people never expected compilers to make but that compilers later made. Some of the aliasing fiascos come to mind.
Making decisions that require you to imagine correctly possible future evolutions of computing seems extremely unwise and has failed repeatedly, sometimes catastrophically, in the past.
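For the hash-table cache use case above, the portable answer is therefore to synchronize. As a rough sketch (in C++ rather than whatever language the pseudocode is in, with invented names), here is the same loop with the shared maps guarded by a mutex, so that every read and write of shared state happens under the lock:

#include <cstdlib>
#include <mutex>
#include <thread>
#include <unordered_map>
#include <vector>

// Shared state, guarded by one mutex. Every read AND write of the shared
// maps happens while holding the lock, so no thread can observe a
// half-written entry.
std::mutex cache_mutex;
std::unordered_map<int, int> cache;
std::unordered_map<int, int> results;

void worker() {
    for (int i = 0; i < 10000; ++i) {
        std::lock_guard<std::mutex> guard(cache_mutex);
        // rand() is not guaranteed to be thread-safe either, so this
        // sketch calls it under the lock as well.
        int z = std::rand() % 50000;
        auto it = cache.find(z);
        if (it == cache.end()) {
            it = cache.emplace(z, z * 100).first;  // compute and cache once
        }
        results[z] = it->second;
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker);
    for (auto& t : threads) t.join();
}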

Using threadsafe initialization in a JRuby gem

Wanting to be sure we're using the correct synchronization (and no more than necessary) when writing threadsafe code in JRuby; specifically, in a Puma instantiated Rails app.
UPDATE: Extensively re-edited this question to be very clear and use the latest code we are implementing. This code uses the atomic gem written by @headius (Charles Nutter) for JRuby, but we're not sure it is totally necessary, or in which ways it's necessary, for what we're trying to do here.
Here's what we've got. Is this overkill (meaning, are we over-engineering this), or perhaps incorrect?
ourgem.rb:
require 'atomic' # gem from @headius

SUPPORTED_SERVICES = %w(serviceABC anotherSvc andSoOnSvc).freeze

module Foo
  def self.included(cls)
    cls.extend(ClassMethods)
    cls.send :__setup
  end

  module ClassMethods
    def get(service_name, method_name, *args)
      __cached_client(service_name).send(method_name.to_sym, *args)
      # we also capture exceptions here, but leaving those out for brevity
    end

    private

    def __client(service_name)
      # obtain and return a client handle for the given service_name
      # we definitely want to cache the value returned from this method
      # **AND**
      # it is a requirement that this method ONLY be called *once PER service_name*.
    end

    def __cached_client(service_name)
      @@_clients.value[service_name]
    end

    def __setup
      @@_clients = Atomic.new({})
      @@_clients.update do |current_services|
        SUPPORTED_SERVICES.inject(Atomic.new({}).value) do |memo, service_name|
          if current_services[service_name]
            current_services[service_name]
          else
            memo.merge({service_name => __client(service_name)})
          end
        end
      end
    end
  end
end
client.rb:
require 'ourgem'

class GetStuffFromServiceABC
  include Foo

  def self.get_some_stuff
    result = get('serviceABC', 'method_bar', 'arg1', 'arg2', 'arg3')
    puts result
  end
end
Summary of the above: we have @@_clients (a mutable class variable holding a Hash of clients), which we only want to populate ONCE for all available services, keyed on service_name.
Since the hash is in a class variable (and hence threadsafe?), are we guaranteed that the call to __client will not get run more than once per service name (even if Puma is instantiating multiple threads with this class to service all the requests from different users)? If the class variable is threadsafe (in that way), then perhaps the Atomic.new({}) is unnecessary?
Also, should we be using an Atomic.new(ThreadSafe::Hash) instead? Or again, is that not necessary?
If not (meaning: you think we do need at least the Atomic.new, and perhaps also the ThreadSafe::Hash), then why couldn't a second (or third, etc.) thread interrupt between the Atomic.new(nil) and the @@_clients.update do ..., meaning the Atomic.new calls from EACH thread would each create two (separate) objects?
Thanks for any thread-safety advice, we don't see any questions on SO that directly address this issue.
Just a friendly piece of advice, before I attempt to tackle the issues you raise here:
This question, and the accompanying code, strongly suggests that you don't (yet) have a solid grasp of the issues involved in writing multi-threaded code. I encourage you to think twice before deciding to write a multi-threaded app for production use. Why do you actually want to use Puma? Is it for performance? Will your app handle many long-running, I/O-bound requests (like uploading/downloading large files) at the same time? Or (like many apps) will it primarily handle short, CPU-bound requests?
If the answer is "short/CPU-bound", then you have little to gain from using Puma. Multiple single-threaded server processes would be better. Memory consumption will be higher, but you will keep your sanity. Writing correct multi-threaded code is devilishly hard, and even experts make mistakes. If your business success, job security, etc. depends on that multi-threaded code working and working right, you are going to cause yourself a lot of unnecessary pain and mental anguish.
That aside, let me try to unravel some of the issues raised in your question. There is so much to say that it's hard to know where to start. You may want to pour yourself a cold or hot beverage of your choice before sitting down to read this treatise:
When you talk about writing "thread-safe" code, you need to be clear about what you mean. In most cases, "thread-safe" code means code which doesn't concurrently modify mutable data in a way which could cause data corruption. (What a mouthful!) That could mean that the code doesn't allow concurrent modification of mutable data at all (using locks), or that it does allow concurrent modification, but makes sure that it doesn't corrupt data (probably using atomic operations and a touch of black magic).
Note that when your threads are only reading data, not modifying it, or when working with shared stateless objects, there is no question of "thread safety".
Another definition of "thread-safe", which probably applies better to your situation, has to do with operations which affect the outside world (basically I/O). You may want some operations to only happen once, or to happen in a specific order. If the code which performs those operations runs on multiple threads, they could happen more times than desired, or in a different order than desired, unless you do something to prevent that.
It appears that your __setup method is only called when ourgem.rb is first loaded. As far as I know, even if multiple threads require the same file at the same time, MRI will only ever let a single thread load the file. I don't know whether JRuby is the same. But in any case, if your source files are being loaded more than once, that is symptomatic of a deeper problem. They should only be loaded once, on a single thread. If your app handles requests on multiple threads, those threads should be started up after the application has loaded, not before. This is the only sane way to do things.
Assuming that everything is sane, ourgem.rb will be loaded using a single thread. That means __setup will only ever be called by a single thread. In that case, there is no question of thread safety at all to worry about (as far as initialization of your "client cache" goes).
Even if __setup was to be called concurrently by multiple threads, your atomic code won't do what you think it does. First of all, you use Atomic.new({}).value. This wraps a Hash in an atomic reference, then unwraps it so you just get back the Hash. It's a no-op. You could just write {} instead.
Second, your Atomic#update call will not prevent the initialization code from running more than once. To understand this, you need to know what Atomic actually does.
Let me pull out the old, tired "increment a shared counter" example. Imagine the following code is running on 2 threads:
i += 1
We all know what can go wrong here. You may end up with the following sequence of events:
Thread A reads i and increments it.
Thread B reads i and increments it.
Thread A writes its incremented value back to i.
Thread B writes its incremented value back to i.
So we lose an update, right? But what if we store the counter value in an atomic reference, and use Atomic#update? Then it would be like this:
Thread A reads i and increments it.
Thread B reads i and increments it.
Thread A tries to write its incremented value back to i, and succeeds.
Thread B tries to write its incremented value back to i, and fails, because the value has already changed.
Thread B reads i again and increments it.
Thread B tries to write its incremented value back to i again, and succeeds this time.
Do you get the idea? Atomic never stops 2 threads from running the same code at the same time. What it does do, is force some threads to retry the #update block when necessary, to avoid lost updates.
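Written out explicitly, that retry pattern is just a compare-and-swap loop. Here is a rough C++ sketch of what an update-style helper does conceptually (this is not the atomic gem's actual implementation):

#include <atomic>

std::atomic<int> i{0};

// What an "update"-style helper boils down to: read the current value,
// compute the new one, and attempt a compare-and-swap. If another thread
// changed i in between, the exchange fails, old_value is refreshed with
// the freshly-read value, and the computation runs again. Note that the
// computation can therefore run more than once.
void increment() {
    int old_value = i.load();
    while (!i.compare_exchange_weak(old_value, old_value + 1)) {
        // retry: old_value now holds the current value of i
    }
}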
If your goal is to ensure that your initialization code will only ever run once, using Atomic is a very inappropriate choice. If anything, it could make it run more times, rather than less (due to retries).
So, that is that. But if you're still with me here, I am actually more concerned about whether your "client" objects are themselves thread-safe. Do they have any mutable state? Since you are caching them, it seems that initializing them must be slow. Be that as it may, if you use locks to make them thread-safe, you may not be gaining anything from caching and sharing them between threads. Your "multi-threaded" server may be reduced to what is effectively an unnecessarily complicated, single-threaded server.
If the client objects have no mutable state, good for you. You can be "free and easy" and share them between threads with no problems. If they do have mutable state, but initializing them is slow, then I would recommend caching one object per thread, so they are never shared. Thread[] is your friend there.
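For illustration, that per-thread caching looks like this as a C++ sketch, where a thread_local instance plays the role of the answer's Thread[] suggestion (ExpensiveClient is a hypothetical stand-in for a slow-to-initialize, stateful client):

#include <iostream>

struct ExpensiveClient {
    ExpensiveClient() { std::cout << "slow init\n"; }  // stand-in for slow setup
    void call() { /* talk to the service */ }
};

// One lazily-constructed instance per thread: the (assumed slow)
// initialization still runs only once per thread, and the mutable
// object is never shared, so no locking is needed around its state.
ExpensiveClient& client_for_this_thread() {
    thread_local ExpensiveClient client;
    return client;
}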

Is it required to lock shared variables in perl for read access?

I am using shared variables in Perl with use threads::shared.
Those variables are modified from only a single thread; all other threads only 'read' them.
Is it required in the 'reading' threads to lock
{
    lock $shared_var;
    if ($shared_var > 0) ...;
}
?
Isn't it safe to do a simple check without locking (in the 'reading' thread!), like
if ($shared_var > 0) ...;
?
Locking is not required to maintain internal integrity when setting or fetching a scalar.
Whether it's needed or not in your particular case depends on the needs of the reader, the other readers and the writers. It rarely makes sense not to lock, but you haven't provided enough details for us to determine what your needs are.
For example, it might not be acceptable to use an old value after the writer has updated the shared variable. For starters, this can lead to a situation where one thread is still using the old value while another thread is using the new value, a situation that can be undesirable if those two threads interact.
It depends on whether it is meaningful to test the condition at just some arbitrary point in time. The problem is that in the vast majority of cases, that Boolean test stands in for other state, which may already have changed by the time you are done reading the condition that represented an earlier state.
Think about it. If it's an insignificant test, then it means little--and you have to question why you are making it. If it's a significant test, then it is telltale of a coherent state that may or may not exist anymore--you won't know for sure, unless you lock it.
A lot of times, say in real-time reporting, you don't really care which snapshot the database hands you; you just want a relatively current one. But as part of its transaction logic, the database keeps a complete picture of how things were prior to a commit. You are unlikely to find that in ordinary code, where the current state is the current state -- and even being in a provisional state is a definite state.
I guess one of the times this can be different is cyclical access of a queue. If one consumer doesn't get the head record this time around, then one of them will the next time around. You can probably save some processing time by reading the queue counter asynchronously. But this is a case where the counter means little in the context of just one iteration.
In the case above, you would follow up with locked-level instructions that expect the queue might actually be empty, even if your test suggested it had data. So if it is just a preliminary test, you have to have logic that treats the test as exactly as unreliable as it actually is.
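That 'unlocked hint, locked re-check' pattern might look like the following C++ sketch (all names invented; the atomic counter plays the role of the asynchronously-read shared scalar):

#include <atomic>
#include <cstddef>
#include <deque>
#include <mutex>
#include <optional>

std::mutex queue_mutex;
std::deque<int> queue;
std::atomic<std::size_t> approx_size{0};  // maintained alongside the queue

void put(int item) {
    std::lock_guard<std::mutex> guard(queue_mutex);
    queue.push_back(item);
    approx_size.fetch_add(1);
}

// Unlocked peek at the counter as a cheap hint, then an authoritative
// re-check under the lock, because the hint may be stale by the time
// we act on it. The caller must treat an empty result as normal even
// when the hint said there was data.
std::optional<int> try_take() {
    if (approx_size.load() == 0) {  // possibly-stale hint: skip the lock
        return std::nullopt;
    }
    std::lock_guard<std::mutex> guard(queue_mutex);
    if (queue.empty()) {            // the hint lied; queue drained meanwhile
        return std::nullopt;
    }
    int item = queue.front();
    queue.pop_front();
    approx_size.fetch_sub(1);
    return item;
}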

Is it ok to create shared variables inside a thread?

I think this might be a fairly easy question.
I found a lot of examples using threads and shared variables, but in no example was a shared variable created inside a thread. I want to make sure I don't do something that seems to work but will break some time in the future.
The reason I need this is that I have a shared hash that maps keys to array refs. Those refs are created/filled by one thread and read/modified by another (proper synchronization is assumed). In order to store those array refs, I have to make them shared too; otherwise I get the error Invalid value for shared scalar.
Following is an example:
use threads;
use threads::shared;
use Data::Dumper;

my %hash :shared;
my $t1 = threads->create(
    sub { my @ar :shared = (1, 2, 3); $hash{foo} = \@ar });
$t1->join;
my $t2 = threads->create(
    sub { print Dumper(\%hash) });
$t2->join;
This works as expected: The second thread sees the changes the first made. But does this really hold under all circumstances?
Some clarifications (regarding Ian's answer):
I have one thread A reading from a pipe and waiting for input. If there is any, thread A will write this input in a shared hash (it maps scalars to hashes... those are the hashes that need to be declared shared as well) and continues to listen on the pipe. Another thread B gets notified (via cond_wait/cond_signal) when there is something to do, works on the stuff in the shared hash and deletes the appropriate entries upon completion. Meanwhile A can add new stuff to the hash.
So regarding Ian's question
[...] Hence most people create all their shared variables before starting any sub-threads.
Therefore even if shared variables can be created in a thread, how useful would it be?
The shared hash is a dynamically growing and shrinking data structure that represents scheduled work that hasn't yet been worked on. Therefore it makes no sense to create the complete data structure at the start of the program.
Also the program has to be in (at least) two threads because reading from the pipe blocks of course. Furthermore I don't see any way to make this happen without sharing variables.
The reason for a shared variable is to share. Therefore it is likely that you will wish to have more than one thread access the variable.
If you create your shared variable in a sub-thread, how will you stop other threads accessing it before it has been created? Hence most people create all their shared variables before starting any sub-threads.
Therefore even if shared variables can be created in a thread, how useful would it be?
(PS, I don’t know if there is anything in perl that prevents shared variables being created in a thread.)
PS A good design will lead to very few (if any) shared variables
This task seems like a good fit for the core module Thread::Queue. You would create the queue before starting your threads, push items on with the reader, and pop them off with the processing thread. You can use the blocking dequeue method to have the processing thread wait for input, avoiding the need for signals.
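Thread::Queue itself is Perl-specific, but the shape it gives you -- create the queue before starting the threads, enqueue from the reader, block in dequeue on the worker -- can be sketched in C++ along these lines (a minimal illustration, not a drop-in replacement):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Minimal blocking queue: the reader thread enqueues, the processing
// thread blocks in dequeue until work arrives, so application code
// needs no explicit cond_wait/cond_signal of its own.
class BlockingQueue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> q;
public:
    void enqueue(std::string item) {
        { std::lock_guard<std::mutex> g(m); q.push(std::move(item)); }
        cv.notify_one();
    }
    std::string dequeue() {  // blocks until an item is available
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty(); });
        std::string item = std::move(q.front());
        q.pop();
        return item;
    }
};

int main() {
    BlockingQueue work;
    std::thread consumer([&] {
        for (int i = 0; i < 3; ++i)
            std::cout << "processing: " << work.dequeue() << "\n";
    });
    for (auto s : {"a", "b", "c"}) work.enqueue(s);  // the "pipe reader"
    consumer.join();
}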
I don't feel good answering my own question but I think the answers so far don't really answer it. If something better comes along, I'd be happy to accept that. Eric's answer helped though.
I now think there is no problem with creating shared variables inside threads. The reasoning is: Thread::Queue's enqueue() method shares anything it enqueues. It does so with shared_clone. Since enqueuing should be fine from any thread, sharing should be too.

Is reading data in one thread while it is written to in another dangerous for the OS?

There is nothing in the way the program uses this data which will cause the program to crash if it reads the old value rather than the new value. It will get the new value at some point.
However, I am wondering if reading and writing at the same time from multiple threads can cause problems for the OS?
I have yet to see any problems, if it does. The program is developed on Linux using pthreads.
I am not interested in being told how to use mutexes/semaphores/locks/etc. Edit: making my program see only the new values is not what I'm asking about.
No, the OS should not have any problem. The typical problem is that you don't want to read an old value, or a value that is halfway updated and thus not valid (which may crash your app; or, if the next value depends on the former, you can get a corrupted value and keep generating wrong values all the time). But if you don't care about that, the OS won't either.
Are the kernel/drivers reading that data for any reason (e.g. it contains structures passed to kernel APIs)? If not, then there isn't any issue, since the OS will never look at your hot memory.
Your own reads must ensure they are consistent, so you don't read half of a value pre-update and half post-update, ending up with a value that is neither the pre- nor the post-update one.
There is no danger for the OS. Only your program's data integrity is at risk.
Imagine your data consists of a set (structure) of values which cannot be updated in one atomic operation. The reading thread is bound to read inconsistent data at some point (a mixture of old and new values). But you did not want to hear about mutexes...
Problems arise when multiple threads share access to data and accessing that data is not atomic. For example, imagine a struct with 10 interdependent fields. If one thread is writing and one is reading, the reading thread is likely to see a struct that is halfway between one state and another (for example, with half of its members set).
If on the other hand the data can be read and written to with a single atomic operation, you will be fine. For example, imagine if there is a global variable that contains a count... One thread is incrementing it on some condition, and another is reading it and taking some action... In this case, there is really no intermediate inconsistent state. It's either got the new value, or it has the old value.
Logically, you can think of locking as a tool that lets you make arbitrary blocks of code atomic, at least as far as the other threads of execution are concerned.
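A minimal C++ sketch of both points (invented names): two fields that only make sense together, with a mutex used to make each read and write block atomic with respect to the other thread:

#include <mutex>

// Two fields that are only meaningful as a pair. Without the mutex, a
// reader could observe x updated but y not yet (a torn, halfway state);
// with it, each block below is atomic as far as the other thread can tell.
struct Point { int x = 0; int y = 0; };

std::mutex m;
Point shared_point;

void writer() {
    std::lock_guard<std::mutex> g(m);
    shared_point.x = 10;  // both updates become visible together
    shared_point.y = 20;
}

Point reader() {
    std::lock_guard<std::mutex> g(m);
    return shared_point;  // always a consistent {x, y} pair
}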
