scope of variable outside for loop - visual-c++

I'm trying to use a program written a few years ago and compiled in a previous version of MS VC++ (I am using VC++ 2008). There are a lot (hundreds) of instances similar to the following:
int main() {
    int number = 0;
    int number2 = 0;
    for (int i = 0; i < 10; i++) {
        // something using i
    }
    for (i = 0; i < 10; i++) {
        // something using i
    }
    return 0;
}
I'm not sure which version it was originally compiled in, but it worked. My question is: how did it work? My understanding is that the i variable should only be defined for use in the first loop. When I try to compile it now I get the error "'i': undeclared identifier" for the line starting the second loop, which makes sense. Was this just overlooked in previous versions of VC++? Thanks!

Earlier versions of MSVC had this "misfeature": they leaked the loop variable into the enclosing scope.
In other words, they treated:
for (int i = 0; i < 10; i++) {
    // something using i
}
the same as:
int i;
for (i = 0; i < 10; i++) {
    // something using i
}
See the answers to this question I asked about a strange macro definition for more detail.

Related

How to convert the following Thread statement in go

I am trying to convert the following Java thread code to Go:
int num = 5;
Thread[] threads = new Thread[5];
for (int i = 0; i < num; i++) {
    threads[i] = new Thread(new NewClass(i));
    threads[i].start();
}
for (int i = 0; i < num; i++)
    threads[i].join();
I wanted to know: how do I convert this to Go?
Thanks
Go uses a concept called "goroutines" (inspired by "coroutines") instead of the "threads" used by many other languages.
Your specific example looks like a common use for the sync.WaitGroup type:
wg := sync.WaitGroup{}
for i := 0; i < 5; i++ {
    wg.Add(1)        // Increment the number of routines to wait for.
    go func(x int) { // Start an anonymous function as a goroutine.
        defer wg.Done() // Mark this routine as complete when the function returns.
        SomeFunction(x)
    }(i) // Pass i as an argument to avoid capturing it.
}
wg.Wait() // Wait for all routines to complete.
Note that SomeFunction(...) in the example can be work of any sort and will be executed concurrently with all the other invocations. However, if it starts goroutines itself, it should use a similar mechanism internally so that it does not return until its own work is actually done.

AE2A dynamic programming

I have been trying to solve this problem: http://www.spoj.com/problems/AE2A/. I know the idea behind this, but I'm getting WA. Can someone help me with this?
my code is: https://ideone.com/rksW1p
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= sum; j++) {
        for (int k = 1; k <= 6 && k < j; k++) {
            A[i][j] += A[i-1][j-k];
        }
    }
}
Let the numbers on top of the die faces be
x1, x2, ..., xn;
then we have to count the ways in which
x1 + x2 + ... + xn = sum, where 1 <= xi <= 6.
Ignoring the upper bound, the number of solutions of this equation in positive integers is (sum-1)C(n-1) by stars and bars. Note this closed form is only valid while sum <= n + 5; beyond that some xi would have to exceed 6, and the bound must be handled (e.g. by inclusion-exclusion, or by the DP above).
Hence the probability will be ((sum-1)C(n-1)) / 6^n.
The answer will be [((sum-1)C(n-1)) / 6^n x 100].
Hope this helps.

Do I need a mutex on a vector of pointers?

Here is a simplified version of my situation:
void AppendToVector(std::vector<int>* my_vector) {
    for (int i = 0; i < 100; i++) {
        my_vector->push_back(i);
    }
}

void CreateVectors(const int num_threads) {
    std::vector<std::vector<int>*> my_vector_of_pointers(10);
    ThreadPool pool(num_threads);
    for (int i = 0; i < 10; i++) {
        my_vector_of_pointers[i] = new std::vector<int>();
        pool.AddTask(AppendToVector, my_vector_of_pointers[i]);
    }
}
My question is whether I need to put a mutex lock in AppendToVector when running this with multiple threads? My intuition tells me I do not have to because there is no possibility of two threads accessing the same data, but I am not fully confident.
Every thread appends to a different std::vector (inside the AppendToVector function), so there is no need for locking in your case. If you change the code so that more than one thread accesses the same vector, then you will need a lock. Don't be confused by the fact that the std::vectors you pass to AppendToVector are themselves elements of an outer std::vector; what matters is that the threads are manipulating completely separate (not shared) memory.

How to implement barrier using posix semaphores?

How to implement barrier using posix semaphores?
void my_barrier_init(int a) {
    int i;
    bar.number = a;
    bar.counter = 0;
    bar.arr = (sem_t*) malloc(sizeof(sem_t) * bar.number);
    bar.cont = (sem_t*) malloc(sizeof(sem_t) * bar.number);
    for (i = 0; i < bar.number; i++) {
        sem_init(&bar.arr[i], 0, 0);
        sem_init(&bar.cont[i], 0, 0);
    }
}

void my_barrier_wait() {
    int i;
    bar.counter++;
    if (bar.number == bar.counter) {
        for (i = 0; i < bar.number - 1; i++) { sem_wait(&bar.arr[i]); }
        for (i = 0; i < bar.number - 1; i++) { sem_post(&bar.cont[i]); }
        bar.counter = 0;
    } else {
        sem_post(&bar.arr[pthread_self() - 2]);
        sem_wait(&bar.cont[pthread_self() - 2]);
    }
}
When my_barrier_wait is called, the first (N-1) calls should each post (+1) to a semaphore in the 'arr' array and go to sleep (calling sem_wait). The N-th call decrements the semaphores in 'arr' and SHOULD (as I expect) wake up the [0..bar.number-1] waiting threads by posting +1 to the semaphores in the 'cont' array. It doesn't behave like a barrier.
You need to look at The Little Book of Semaphores (PDF) by Allen Downey, specifically section 3.6.7. The code there is in Python, but the gist of it should be clear enough.

What is proper use of Vala thread pools?

I'm trying to use GLib.ThreadPools in Vala, but after searching Google Code and the existing documentation, I can't find any good examples of their use. My own attempts at using them result in unhandled GLib.ThreadErrors.
For example, consider the following 26 lines, which thread the multiplication of integer ranges.
threaded_multiply.vala
class Range {
    public int low;
    public int high;

    public Range(int low, int high) {
        this.low = low;
        this.high = high;
    }
}

void multiply_range(Range r) {
    int product = 1;
    for (int i = r.low; i <= r.high; i++)
        product = product * i;
    print("range(%s, %s) = %s\n",
          r.low.to_string(), r.high.to_string(), product.to_string());
}

void main() {
    ThreadPool<Range> threads;
    threads = new ThreadPool<Range>((Func<Range>) multiply_range, 4, true);
    for (int i = 1; i <= 10; i++)
        threads.push(new Range(i, i + 5));
}
Compiling this with valac --thread threaded_multiply.vala works fine... but spews warnings at me. Given the dangers of multithreading, this makes me think I'm doing something wrong that might explode in my face eventually.
Does anyone know how to use GLib.ThreadPool correctly? Thanks for reading, and more thanks if you have an answer.
edit: I thought it might be because of my compiling machine, but no, Thread.supported() evaluates to true here.
I don't see anything wrong with your code, and the compiler warnings are about not catching ThreadError, which you probably should do. Just add a try and catch like this:
try {
    threads = new ThreadPool<Range>((Func<Range>) multiply_range, 4, true);
    for (int i = 1; i <= 10; i++)
        threads.push(new Range(i, i + 5));
}
catch (ThreadError e) {
    // Error handling
    stdout.printf("%s", e.message);
}