why is my program code not running goto statement infinitely in c#? - c#-4.0

class Program
{
    static void Main()
    {
        int i;
        for (i = 0; i < 10; i++)
        {
            // p: Console.WriteLine("hello");
            p: if (i % 2 != 0)
            {
                if (i == 5)
                {
                    goto p;
                }
            }
            Console.WriteLine(i);
        }
        Console.ReadKey();
    }
}
//output : 0 1 2 3 4

The goto is being executed. Unfortunately, nothing has changed in the variables, so execution takes exactly the same path as before: it reaches the goto again. This is an infinite loop.
Note that, in particular, i has the same value before and after the jump. Maybe this is your misunderstanding.
You should learn to use the debugger; you can see all of this by single-stepping through the program.

Related

Calling a Go callback from a threaded low-level C/C++ code layer in a Go application

I have a Go application and some C API function (e.g. some Win32 API function) that works asynchronously and spawns worker threads. This API function calls callbacks from some of those worker threads. The threads are system ones and are created internally by the C code (not by Go). Now, I want to pass Go functions as callbacks to that C API function. So, the Go callback functions would be called by the C function in the context of the worker threads, not known to the Go application.
We can assume that the safety measures have been taken and all the data access in callbacks is properly guarded by mutexes in order not to interfere with the main Go code.
The question is: "Does Go support such a scenario?" That is, would the callbacks work properly, or could something crash inside because the Go runtime is not designed for what I'd like to do?
I have conducted an experiment with a Go callback called from 20 native Windows threads in parallel. The callback increments a variable, adds elements to a map, and prints the value on the screen. Everything works smoothly, so I assume there would be no problems in more complex scenarios either.
Here's the source code of my tests for others to use:
proxy.h
#ifndef _PROXY_H_
#define _PROXY_H_
long threaded_c_func(long param);
#endif
proxy.c
#include "proxy.h"
#ifdef WIN32
#include <Windows.h>
#endif
#define ROUNDS 20
volatile long passed = 0;
extern long long threadedCallback(long cbidx);
DWORD WINAPI ThreadFunc(LPVOID param) {
threadedCallback(*((long *)param));
InterlockedIncrement(&passed);
return 0;
}
long threaded_c_func(long cbidx) {
for (int i = 0; i < ROUNDS; i++)
{
DWORD ThreadId = 0;
CreateThread(NULL, 1024*1024, &ThreadFunc, (LPVOID) &cbidx, 0, &ThreadId);
}
while (passed < ROUNDS)
{
Sleep(100);
}
return ROUNDS;
}
callbackTest.go
package main
/*
#cgo CFLAGS: -I .
#cgo LDFLAGS: -L .
#include "proxy.h"
long threaded_c_func(long param);
*/
import "C"
import (
"fmt"
"strconv"
"sync"
)
var hashTable map[int32]string
var count int32
var mtx sync.Mutex
//export threadedCallback
func threadedCallback(cbidx int) C.longlong {
mtx.Lock()
defer mtx.Unlock()
count++
hashTable[count] = strconv.Itoa(int(count))
fmt.Println("Current counter ", count)
return C.longlong(count)
}
func main() {
hashTable = make(map[int32]string)
var expected C.long
expected = C.threaded_c_func(1)
if int32(expected) == count {
fmt.Println("Counters match")
} else {
fmt.Println("Expected ", int32(expected), " got ", count)
}
for k, v := range hashTable {
if strconv.Itoa(int(k)) == v {
fmt.Println(v, " match")
} else {
fmt.Println(v, "don't match")
}
}
}

Worker thread suspend / resume implementation

In my attempt to add suspend / resume functionality to my Worker [thread] class, I've happened upon an issue that I cannot explain. (C++1y / VS2015)
The issue looks like a deadlock, however I cannot seem to reproduce it once a debugger is attached and a breakpoint is set before a certain point (see #1) - so it looks like it's a timing issue.
The fix that I could find (#2) doesn't make a lot of sense to me, because it requires holding on to a mutex for longer, during which client code might attempt to acquire other mutexes, which I understand actually increases the chance of a deadlock.
But it does fix the issue.
The Worker loop:
Job* job;
while (true)
{
{
std::unique_lock<std::mutex> lock(m_jobsMutex);
m_workSemaphore.Wait(lock);
if (m_jobs.empty() && m_finishing)
{
break;
}
// Take the next job
ASSERT(!m_jobs.empty());
job = m_jobs.front();
m_jobs.pop_front();
}
bool done = false;
bool wasSuspended = false;
do
{
// #2
{ // Removing this extra scoping seemingly fixes the issue BUT
// incurs us holding on to m_suspendMutex while the job is Process()ing,
// which might 1, be lengthy, 2, acquire other locks.
std::unique_lock<std::mutex> lock(m_suspendMutex);
if (m_isSuspended && !wasSuspended)
{
job->Suspend();
}
wasSuspended = m_isSuspended;
m_suspendCv.wait(lock, [this] {
return !m_isSuspended;
});
if (wasSuspended && !m_isSuspended)
{
job->Resume();
}
wasSuspended = m_isSuspended;
}
done = job->Process();
}
while (!done);
}
Suspend / Resume is just:
void Worker::Suspend()
{
std::unique_lock<std::mutex> lock(m_suspendMutex);
ASSERT(!m_isSuspended);
m_isSuspended = true;
}
void Worker::Resume()
{
{
std::unique_lock<std::mutex> lock(m_suspendMutex);
ASSERT(m_isSuspended);
m_isSuspended = false;
}
m_suspendCv.notify_one(); // notify_all() doesn't work either.
}
The (Visual Studio) test:
struct Job: Worker::Job
{
int durationMs = 25;
int chunks = 40;
int executed = 0;
bool Process()
{
auto now = std::chrono::system_clock::now();
auto until = now + std::chrono::milliseconds(durationMs);
while (std::chrono::system_clock::now() < until)
{ /* busy, busy */
}
++executed;
return executed < chunks;
}
void Suspend() { /* nothing here */ }
void Resume() { /* nothing here */ }
};
auto worker = std::make_unique<Worker>();
Job j;
worker->Enqueue(j);
std::this_thread::sleep_for(std::chrono::milliseconds(j.durationMs)); // Wait at least one chunk.
worker->Suspend();
Assert::IsTrue(j.executed < j.chunks); // We've suspended before we finished.
const int testExec = j.executed;
std::this_thread::sleep_for(std::chrono::milliseconds(j.durationMs * 4));
Assert::IsTrue(j.executed == testExec); // We haven't moved on.
// #1
worker->Resume(); // Breaking before this call means that I won't see the issue.
worker->Finalize();
Assert::IsTrue(j.executed == j.chunks); // Now we've finished.
What am I missing / doing wrong? Why does the Process()ing of the job have to be guarded by the suspend mutex?
EDIT: Resume() should not have been holding on to the mutex at the time of notification; that's fixed -- the issue persists.
Of course the Process()ing of the job does not have to be guarded by the suspend mutex.
The access of j.executed - for the asserts as well as for the incrementing - does, however, need to be synchronized (either by making it a std::atomic<int> or by guarding it with a mutex, etc.).
It's still not clear why the issue manifested the way it did (since I'm not writing to the variable on the main thread) -- might be a case of undefined behaviour propagating backwards in time.
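For illustration, here is a minimal sketch of the first option (std::atomic<int>), reusing the Job from the test above; the Worker::Job base class and the rest of the test are assumed to stay exactly as in the question:
#include <atomic>
#include <chrono>

struct Job : Worker::Job
{
    int durationMs = 25;
    int chunks = 40;
    std::atomic<int> executed{ 0 };  // was a plain int; now both threads may touch it safely

    bool Process()
    {
        auto until = std::chrono::system_clock::now()
                   + std::chrono::milliseconds(durationMs);
        while (std::chrono::system_clock::now() < until)
        { /* busy, busy */ }
        ++executed;                  // atomic increment on the worker thread
        return executed < chunks;    // atomic read
    }
    void Suspend() { /* nothing here */ }
    void Resume() { /* nothing here */ }
};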

How to search through next available thread to do computation

I am doing multithreading in C++. This may be something very standard but I can't seem to find it anywhere or know any key terms to search for it online.
I want to do some sort of computation many times but with multiple threads. For each iteration of computation, I want to find the next available thread that has finished its previous computation to do the next iteration. I don't want to cycle through the threads in order since the next thread to be called may not have finished its work yet.
E.g.
Suppose I have a vector of int and I want to sum up the total with 5 threads. I store the to-be-updated total sum somewhere, along with a count of which element I am currently up to. Each thread looks at the count to see the next position, takes that vector value, and adds it to the total sum so far. Then it goes back to the count for the next iteration. So for each iteration the count increments, and then the next available thread (maybe one already waiting for the count, or maybe all of them are still busy) does the next iteration. We do not increase the number of threads, but I want to be able to somehow search through all 5 threads for the first one that finishes, so it can do the next computation.
How would I go about coding this? Every way I know of involves looping through the threads in order, so I can't check for the next available one, which may be out of order.
Use a semaphore (or mutex, I always mix those two up) around a global variable telling you what is next. The semaphore locks the other threads out while you access the variable, so each thread's access to it is exclusive.
So, assuming you have an array of X elements and a global called nextfree which is initialized to 0, the pseudocode would look like this:
while (1)
{
    <lock semaphore>
    if (nextfree >= X)
    {
        <release semaphore>
        <exit and terminate thread>
    }
    <get the data based on "nextfree">
    nextfree++;
    <release semaphore>
    <do your stuff with the chunk you got>
}
The point here is that each thread has exclusive access to the shared data structure while it holds the semaphore, and can therefore grab the next available element regardless of what the others are doing. (The other threads have to wait in line if they finish while another thread is getting its next data chunk. When you release, only one of the threads standing in the queue gets access; the rest keep waiting.)
There are some things to beware of. Semaphores can lock up your system if you exit at the wrong point (without releasing them) or otherwise create a deadlock.
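To make the idea concrete, here is a minimal C++ sketch of the same scheme (the names and the summing example are mine, not part of the answer above): five threads repeatedly grab the next index from a shared counter under a std::mutex and add that element to a shared total.
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main()
{
    std::vector<int> data(1000, 1);   // the values to sum
    std::size_t nextFree = 0;         // index of the next unprocessed element
    long long total = 0;
    std::mutex m;                     // plays the role of the "semaphore" above

    auto workerFn = [&] {
        while (true)
        {
            std::size_t i;
            {
                std::lock_guard<std::mutex> lock(m);  // <lock semaphore>
                if (nextFree >= data.size())
                    return;                           // <exit and terminate thread>
                i = nextFree++;                       // grab the next chunk
            }                                         // <release semaphore>
            int value = data[i];                      // <do your stuff with the chunk you got>

            std::lock_guard<std::mutex> lock(m);      // re-acquire to update the shared sum
            total += value;
        }
    };

    std::vector<std::thread> workers;
    for (int t = 0; t < 5; ++t)
        workers.emplace_back(workerFn);
    for (auto& w : workers)
        w.join();

    std::cout << "total = " << total << "\n";  // prints "total = 1000"
}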
This is a thread pool:
#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <future>
#include <mutex>
#include <type_traits>
#include <vector>
#include <boost/optional.hpp>

template<class T>
struct threaded_queue {
using lock = std::unique_lock<std::mutex>;
void push_back( T t ) {
{
lock l(m);
data.push_back(std::move(t));
}
cv.notify_one();
}
boost::optional<T> pop_front() {
lock l(m);
cv.wait(l, [this]{ return abort || !data.empty(); } );
if (abort) return {};
auto r = std::move(data.front());
data.pop_front();
return std::move(r);
}
void terminate() {
{
lock l(m);
abort = true;
data.clear();
}
cv.notify_all();
}
~threaded_queue()
{
terminate();
}
private:
std::mutex m;
std::deque<T> data;
std::condition_variable cv;
bool abort = false;
};
struct thread_pool {
thread_pool( std::size_t n = 1 ) { start_thread(n); }
thread_pool( thread_pool&& ) = delete;
thread_pool& operator=( thread_pool&& ) = delete;
~thread_pool() = default; // or `{ terminate(); }` if you want to abandon some tasks
template<class F, class R=std::result_of_t<F&()>>
std::future<R> queue_task( F task ) {
std::packaged_task<R()> p(std::move(task));
auto r = p.get_future();
tasks.push_back( std::move(p) );
return r;
}
template<class F, class R=std::result_of_t<F&()>>
std::future<R> run_task( F task ) {
if (threads_active() >= total_threads()) {
start_thread();
}
return queue_task( std::move(task) );
}
void terminate() {
tasks.terminate();
}
std::size_t threads_active() const {
return active;
}
std::size_t total_threads() const {
return threads.size();
}
void clear_threads() {
terminate();
threads.clear();
}
void start_thread( std::size_t n = 1 ) {
while(n-->0) {
threads.push_back(
std::async( std::launch::async,
[this]{
while(auto task = tasks.pop_front()) {
++active;
try{
(*task)();
} catch(...) {
--active;
throw;
}
--active;
}
}
)
);
}
}
private:
std::vector<std::future<void>> threads;
threaded_queue<std::packaged_task<void()>> tasks;
std::atomic<std::size_t> active{0};
};
You give it a number of threads, either at construction or via start_thread.
You then call queue_task. This returns a std::future that tells you when the task is completed.
As threads finish a task, they go back to the threaded_queue and look for more.
When a threaded_queue is destroyed, it aborts all data in it.
When a thread_pool is destroyed, it aborts all pending tasks, then waits for the outstanding ones to finish.
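To connect this back to the original question, here is a minimal usage sketch (mine, not part of the answer above), assuming the threaded_queue and thread_pool shown above are in the same file: five pool threads sum a vector in chunks, and whichever thread is free picks up the next chunk.
#include <algorithm>
#include <atomic>
#include <iostream>
#include <numeric>

int main()
{
    std::vector<int> data(1000, 1);
    std::atomic<long long> total{ 0 };

    thread_pool pool(5);  // 5 worker threads

    // Queue one task per 100-element chunk; each task adds its partial sum
    // to the shared atomic total, and its future lets us wait for completion.
    std::vector<std::future<void>> done;
    const std::size_t chunk = 100;
    for (std::size_t begin = 0; begin < data.size(); begin += chunk) {
        const std::size_t end = std::min(begin + chunk, data.size());
        done.push_back(pool.queue_task([&data, &total, begin, end] {
            total += std::accumulate(data.begin() + begin,
                                     data.begin() + end, 0LL);
        }));
    }

    for (auto& f : done)
        f.get();  // wait until every chunk has been processed

    std::cout << "total = " << total << "\n";  // prints "total = 1000"
}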

How to test if a critical section is locked, without entering it? Or, how to wait until a critsec is owned by another thread?

I am working on forcing certain deadlock scenarios to reproduce consistently, for development purposes. To do that, it would be helpful for a thread to be able to wait until a critical section is locked by another thread, so that its own subsequent attempt to enter it is forced to block.
So, I want something like:
void TryToWaitForBlock(CriticalSection& cs, DWORD ms)
{
// wait until this CS is blocked, then return
}
...
void someFunction()
{
// ...
TryToWaitForBlock(cs, 5000);// this will give much more time for the crit sec to block by other threads, increasing the chance that the next call will block.
EnterCriticalSection(cs);// normally this /very/ rarely blocks. When it does, it deadlocks.
// ...
}
TryEnterCriticalSection would be perfect, but because it will actually enter the critical section, it is not usable. Is there a similar function that will do the test, but NOT also try to enter it?
bool TryToWaitForBlock( CRITICAL_SECTION& cs, DWORD ms )
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency( &freq );
    LARGE_INTEGER now;
    QueryPerformanceCounter( &now );
    LARGE_INTEGER waitTill;
    waitTill.QuadPart = static_cast<LONGLONG>(now.QuadPart + freq.QuadPart * (ms / 1000.0));
    // Poll the critical section's OwningThread field until some thread owns it
    // or the timeout elapses. Reading the field through a volatile reference
    // keeps the compiler from caching the value across iterations.
    while( now.QuadPart < waitTill.QuadPart ) {
        if( NULL != static_cast<volatile HANDLE&>(cs.OwningThread) ) {
            return true;  // the CS is currently owned, so entering it now should block
        }
        QueryPerformanceCounter( &now );
    }
    return false;  // timed out without ever observing an owner
}
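As a quick sanity check, a minimal test sketch (mine, not from the answer; it assumes the TryToWaitForBlock above is defined in the same file): a helper thread takes a critical section and holds it for a while, and the main thread waits until it observes the section as owned.
#include <windows.h>
#include <thread>
#include <cstdio>

CRITICAL_SECTION g_cs;

int main()
{
    InitializeCriticalSection(&g_cs);

    std::thread holder([] {
        EnterCriticalSection(&g_cs);
        Sleep(2000);                  // hold the lock for a couple of seconds
        LeaveCriticalSection(&g_cs);
    });

    // Should return true well before the 5 second timeout,
    // because 'holder' owns the critical section by then.
    bool owned = TryToWaitForBlock(g_cs, 5000);
    std::printf("critical section owned by another thread: %s\n", owned ? "yes" : "no");

    holder.join();
    DeleteCriticalSection(&g_cs);
}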

Timing delay on my PIC

Prepare for a nooby question.
I'm writing some ultra-simple code for this new PIC which I've just got. All I'm trying to do is to flash an LED. Below are two code samples - the first works but the second doesn't. Why?? I can't see any problem with the second one.
WORKS:
while(1)
{
i=99999;
while(i--) {
LATAbits.LATA0 = 0; // set RA0 to logic 0
}
i=99999;
while(i--) {
LATAbits.LATA0 = 1; // set RA0 to logic 1
}
}
DOESN'T WORK:
while(1)
{
LATAbits.LATA0 = 1; // set RA0 to logic 1
for(i=0;i<99999;i++) {}
LATAbits.LATA0 = 0; // set RA0 to logic 0
for(i=0;i<99999;i++) {}
}
Thanks in advance for the help!
Try this:
while(1)
{
for(i=0;i<99999;i++)
{
LATAbits.LATA0 = 1; // set RA0 to logic 1
}
for(i=0;i<99999;i++)
{
LATAbits.LATA0 = 0; // set RA0 to logic 0
}
}
Maybe your compiler is optimizing the code and removing the for statements, since they have no code to execute inside. What I did was put the RA0 assignment inside these loops, forcing the compiler to keep the delay loops.
You can also pass the argument -S to your compiler to see the assembly code generated and confirm that the for statements were removed. The -S option generates an intermediate file with a ".S" extension.
What's your definition of i? char i? int i? long i?
What PIC are you using?
If you're using an int, the 8-bit PICs use a 16-bit int.
So what happens:
// Try to stuff 99999 into 16-bits, but it becomes 34463
i=99999;
// Count-down from 34463 to zero
while(i--) {}
// Successfully exit delay
As opposed to:
// Count up from zero
// Get to 65535 and then reset to zero
// Never reach 99999
for(i=0;i<99999;i++) {}
// Never exit delay
Try using unsigned long int for i, which tends to be 32-bit on PICs, and see if it starts to work.
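For clarity, a minimal sketch of that suggestion (assuming a compiler where unsigned long is 32 bits wide): 99999 now fits in the counter, so the count-up loop terminates instead of wrapping around at 65535.
unsigned long i;  // 32-bit on typical PIC compilers, so it can hold 99999
for (i = 0; i < 99999UL; i++)
{
    /* intentional busy-wait delay */
}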
