A thread is a single stream of control in the flow of a program. Which programming models does it induce, and what are the possible advantages and disadvantages of such models? (Are OpenMP, MPI, Pthreads, and CUDA the ones that involve threads? What are some pros and cons of each programming model?) Thank you.
You can't both initialize a variable and declare it as extern. They are two different things.
When you declare a global variable, you can initialize it:
// trace_logger.c
int inst_count = 0;
When the global variable may be used by other modules, you declare it in a .h file (or directly in the other .c/.cpp source file) that the other modules can include:
// trace_logger.h
extern int inst_count;
And if you need a global variable but don't want to share it with other modules, you declare it as:
// trace_logger.c
static int counter= 0;
Related
I am in a situation where I have to use a static variable across multiple threads. Is there an alternative approach that would work the same way? Kindly ignore any ridiculousness.
Threads are, fundamentally, also functions.
So a static variable behaves the same across threads as it does with ordinary functions.
Because a static variable is stored in a section of the process other than the stack segment, any change to a static variable made in one thread will be visible in the other threads.
I've read a lot about thread safety when reading a variable simultaneously from multiple threads, but I am still not sure whether my case is fine or not.
Consider that I have:
const
MySettings: TFormatSettings =
(
CurrencyFormat : 0;
NegCurrFormat : 0;
ThousandSeparator: ' ';
DecimalSeparator : '.';
CurrencyString : '¤';
ShortDateFormat : 'MM/dd/yyyy';
LongDateFormat : 'dddd, dd MMMM yyyy';
//All fields of record are initialized.
);
Can I use FormatDateTime('dd/mm/yyyy hh:nn:ss', MySettings, Now) in multiple threads without worries or should I spawn a separate copy of MySettings for each thread?
This scenario is threadsafe if and only if the format settings record is not modified during the simultaneous calls to formatting functions.
Indeed the old school formatting functions that used a shared global format settings record were threadsafe if and only if the shared object was not modified. This is the key point. Is the format settings object modified or not?
My take on all this is that you should avoid modifying format settings objects. Initialise them and then never modify them. That way you never have thread safety issues.
Yes this is perfectly safe.
As long as MySettings is not changed, this is the way to use FormatDateTime and other similar procedures.
From documentation, System.SysUtils.TFormatSettings:
A variable of type TFormatSettings defines a thread-safe context that formatting functions can use in place of the default global context, which is not thread-safe.
N.B. You must provide this thread-safe context yourself in your code. It is thread-safe only if you ensure that the parameter, and anything it shares, is not changed during execution.
Typically my serializing libraries use a shared constant format settings variable, which provides a stable read/write format in all locales.
I am reading Concepts of Programming Languages, chapter 5, and find that:
static's disadvantage: subprograms cannot share the same storage.
stack-dynamic's advantage: even without recursion, it is not without merit that subprograms can share the same memory space for their locals.
And I think that because a static variable is bound from program start to termination, all subprograms should be able to see and use it,
like the code I tested:
#include <iostream>
static int test = 0;
void func1() { std::cout << test++ << std::endl; }
void func2() { std::cout << test++ << std::endl; }
int main() {
    func1();
    func2();
}
and stack-dynamic allocation happens each time the function executes, like being pushed onto a stack (LIFO), so the variables are in different memory spaces.
I don't know where the error in my thinking is.
Thanks in advance.
Your program runs in a dedicated memory space.
Static variables have global scope.
Given that, I suppose "subprograms cannot share the same storage" means that a static variable is instantiated only once and is the same object during the entire lifetime of the program.
This has several consequences:
If your independent functions need storage as part of their execution, they shouldn't address the same static variables because they will affect the other functions using the same variables.
If your functions can run in parallel (e.g. on several processors) and they address the same static variables, these variables become a shared resource, meaning they must be protected against concurrent access (which can otherwise corrupt the data and result in incorrect behavior).
Dynamically allocated/stack variables use the same memory space in which the program runs, but they can be instantiated many times, and the instances are independent of each other. E.g., if a stack variable is defined inside a function, it's allocated upon function entry and released upon function exit. If the function is entered again, a new variable will be created. It's the same memory space, but a different stack frame within that space.
While writing kernel modules/drivers, most of the time some structures are initialized to point to specific functions. As a beginner, could someone explain the importance of this?
I came across struct file_operations while writing a character device driver.
Also, I found that even though the functions are declared, they are not always implemented. Could anyone help with that too? For example, in the kernel source kernel/dma.c, even though
static const struct file_operations proc_dma_operations = {
.open = proc_dma_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
are assigned, only proc_dma_open is implemented in that file.
The functions seq_read, seq_lseek and single_release are declared in the kernel source file linux-3.1.6/include/linux/seq_file.h and defined in the kernel source file linux-3.1.6/fs/seq_file.c. They are probably common to many file operations.
If you ever played with object-oriented languages like C++, think of file_operations as a base class, and your functions as being implementations of its virtual methods.
Pointers to functions are a very powerful tool in C that allows run-time redirection of function calls. Most, if not all, operating systems have a similar mechanism; for example, the infamous INT 21h functions 25h/35h in old MS-DOS allowed TSR programs to exist.
In C, you can assign the pointer to a function to a variable and then call that function through that variable. The function can be changed either at init time based on some parameters or at runtime based on some behavior.
Here is an example:
int fn(int a)
{
...
return a;
}
...
int (*dynamic_fn)(int);
...
dynamic_fn = &fn;
...
int i = dynamic_fn(0);
When the pointer "lives" in a structure that can be passed to system calls, this is a very powerful feature that allows hooks into system functions.
In object oriented languages, the same kind of behavior can be achieved by using reflection to instantiate classes dynamically.
Are there languages that support process common memory in one address space and thread specific memory in another address space using language features rather than through a mechanism like function calls?
process int x;
thread int y;
The ThreadStatic attribute in C#.
The Visual C++ compiler allows the latter through the nonstandard __declspec(thread) extension - however, it is severely limited, since it isn't supported in dynamically loaded DLLs.
The first is mostly supported through an extern declaration - unless dynamically linked libraries come into play (which is probably the scenario you are looking for).
I am not aware of any environment that makes this as simple as you describe.
C++0x adds the "thread_local" storage specifier, so at namespace (or global) scope your example would be
int x; // normal process-wide global variable
thread_local int y; // per-thread global variable
You can also use thread_local with static when declaring class members or local variables in a function:
class Foo {
static thread_local int x;
};
void f() {
static thread_local int x;
}
Unfortunately, this doesn't appear to be one of the C++0x features supported by Visual Studio 2010 or planned GCC releases.