M5Atom with mutex crashes - multithreading

My M5Atom Lite crashes right after booting with the following code, and then keeps rebooting.
The code is a simple test of multi-tasking and a mutex.
#include <Arduino.h>

SemaphoreHandle_t xMutex = NULL;
volatile int a = 0;

void subprocess(void *pvParameters)
{
    for (;;)
    {
        if (xSemaphoreTake(xMutex, (portTickType)1000) == pdTRUE)
        {
            Serial.println(2 * a);
            a++;
            xSemaphoreGive(xMutex);
        }
        delay(1000);
    }
}

void setup()
{
    // put your setup code here, to run once:
    Serial.begin(115200);
    // Begin task with Core 0
    xTaskCreatePinnedToCore(subprocess, "subprocess", 4096, NULL, 1, NULL, 0);
}

void loop()
{
    if (xSemaphoreTake(xMutex, (portTickType)1000) == pdTRUE)
    {
        Serial.println(2 * a + 1);
        a++;
        xSemaphoreGive(xMutex);
    }
    delay(1000);
}
The M5Atom Lite is a module with an ESP32-PICO-D4, a dual-core MCU.
I get the following message via the serial monitor:
Rebooting...
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 188777542, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:2
load:0x3fff0018,len:4
load:0x3fff001c,len:1044
load:0x40078000,len:10124
load:0x40080400,len:5828
entry 0x400806a8
[The assert output from the two cores is interleaved; de-interleaved it reads approximately:]
/home/runner/work/esp32-arduino-lib-builder/esp32-arduino-lib-builder/esp-idf/components/freertos/queue.c:14xx (xQueueGenericReceive)- assert failed!
abort() was called at PC 0x4008xxxx on core 1
ELF file SHA256: 0000000000000000
Backtrace: 0x40084ea0:0x3ffb1f10 0x40085115:0x3ffb1f30 0x40085e51:0x3ffb1f50 0x400d0cc8:0x3ffb1f90 0x400d1f79:0x3ffb1fb0 0x40086125:0x3ffb1fd0

You've not created the semaphore that you're trying to use. You need to create it first in setup(). For example, assuming you want a binary semaphore, created by xSemaphoreCreateBinary():
void setup()
{
    // put your setup code here, to run once:
    Serial.begin(115200);
    xMutex = xSemaphoreCreateBinary();
    // Begin task with Core 0
    xTaskCreatePinnedToCore(subprocess, "subprocess", 4096, NULL, 1, NULL, 0);
}
Error handling omitted for simplicity.
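Note that a binary semaphore from xSemaphoreCreateBinary() is created in the "empty" state, so with the code above both tasks' xSemaphoreTake() calls will just keep timing out until something gives the semaphore once. Since xMutex is used for mutual exclusion, xSemaphoreCreateMutex() is arguably the better fit, because a mutex is created in the "available" state:

    xMutex = xSemaphoreCreateMutex(); // created available, so the first Take succeeds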

Related

How does generating an interrupt (signaling an irqfd) trigger an interrupt in the KVM VM?

The KVM irqfd ioctl registers an irqfd for an eventfd file descriptor.
It does this:
case KVM_IRQFD: {
    struct kvm_irqfd data;

    r = -EFAULT;
    if (copy_from_user(&data, argp, sizeof(data)))
        goto out;
    r = kvm_irqfd(kvm, &data);
    break;
}
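(For context, userspace reaches this ioctl roughly as follows. This is a hedged sketch, not kernel code; vm_fd, gsi, and wire_irqfd are illustrative assumptions.)

/* Illustrative userspace sketch: bind an eventfd to a guest GSI.
 * vm_fd is an open KVM VM file descriptor (from KVM_CREATE_VM). */
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int wire_irqfd(int vm_fd, unsigned int gsi)
{
    int efd = eventfd(0, EFD_CLOEXEC);
    if (efd < 0)
        return -1;
    struct kvm_irqfd req = { 0 };
    req.fd = efd;
    req.gsi = gsi;
    if (ioctl(vm_fd, KVM_IRQFD, &req) < 0)
        return -1;
    /* From here on, every write to efd asks KVM to inject an
     * interrupt on this GSI in the guest. */
    return efd;
}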
kvm_irqfd itself is here, and it calls kvm_irqfd_assign, which initializes a wait-queue entry:
init_waitqueue_func_entry(&irqfd->wait, irqfd_wakeup);
That is, irqfd_wakeup does this:
if (flags & EPOLLIN) {
    u64 cnt;
    eventfd_ctx_do_read(irqfd->eventfd, &cnt);

    idx = srcu_read_lock(&kvm->irq_srcu);
    do {
        seq = read_seqcount_begin(&irqfd->irq_entry_sc);
        irq = irqfd->irq_entry;
    } while (read_seqcount_retry(&irqfd->irq_entry_sc, seq));

    /* An event has been signaled, inject an interrupt */
    if (kvm_arch_set_irq_inatomic(&irq, kvm,
                                  KVM_USERSPACE_IRQ_SOURCE_ID, 1,
                                  false) == -EWOULDBLOCK)
        schedule_work(&irqfd->inject);
    srcu_read_unlock(&kvm->irq_srcu, idx);
    ret = 1;
}
As you can see in schedule_work(&irqfd->inject), it schedules the inject function, which is here:
static void
irqfd_inject(struct work_struct *work)
{
    struct kvm_kernel_irqfd *irqfd =
        container_of(work, struct kvm_kernel_irqfd, inject);
    struct kvm *kvm = irqfd->kvm;

    if (!irqfd->resampler) {
        kvm_set_irq(kvm, KVM_USERSPACE_IRQ_SOURCE_ID, irqfd->gsi, 1,
                    false);
        kvm_set_irq(kvm, KVM_USERSPACE_IRQ_SOURCE_ID, irqfd->gsi, 0,
                    false);
    } else
        kvm_set_irq(kvm, KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID,
                    irqfd->gsi, 1, false);
}
It calls kvm_set_irq, defined here, which does this:
int kvm_set_irq(struct kvm *kvm, int irq_source_id, u32 irq, int level,
                bool line_status)
{
    struct kvm_kernel_irq_routing_entry irq_set[KVM_NR_IRQCHIPS];
    int ret = -1, i, idx;

    trace_kvm_set_irq(irq, level, irq_source_id);

    /* Not possible to detect if the guest uses the PIC or the
     * IOAPIC. So set the bit in both. The guest will ignore
     * writes to the unused one.
     */
    idx = srcu_read_lock(&kvm->irq_srcu);
    i = kvm_irq_map_gsi(kvm, irq_set, irq);
    srcu_read_unlock(&kvm->irq_srcu, idx);

    while (i--) {
        int r;
        r = irq_set[i].set(&irq_set[i], kvm, irq_source_id, level,
                           line_status);
        if (r < 0)
            continue;

        ret = r + ((ret < 0) ? 0 : ret);
    }

    return ret;
}
It looks like it finally calls something at:
r = irq_set[i].set(&irq_set[i], kvm, irq_source_id, level,
                   line_status);
This set function pointer is filled in by this; it is set to this function:
static int vgic_irqfd_set_irq(struct kvm_kernel_irq_routing_entry *e,
                              struct kvm *kvm, int irq_source_id,
                              int level, bool line_status)
{
    unsigned int spi_id = e->irqchip.pin + VGIC_NR_PRIVATE_IRQS;

    if (!vgic_valid_spi(kvm, spi_id))
        return -EINVAL;
    return kvm_vgic_inject_irq(kvm, 0, spi_id, level, NULL);
}
which calls kvm_vgic_inject_irq, which finally calls vgic_put_irq, which calls this:
void __vgic_put_lpi_locked(struct kvm *kvm, struct vgic_irq *irq)
{
    struct vgic_dist *dist = &kvm->arch.vgic;

    if (!kref_put(&irq->refcount, vgic_irq_release))
        return;

    list_del(&irq->lpi_list);
    dist->lpi_list_count--;

    kfree(irq);
}
but I don't see how the GIC is invoked here; I only see the list entry being deleted.
I thought this would be where the interrupt is sent to the GIC, which would then somehow interrupt the VM.
I'm trying to understand how signaling the irqfd file descriptor ends up injecting an interrupt into the VM.
VGIC is ARM-specific; you should check the ARM support files. x86 uses either the APIC or the PIC (mostly the APIC now).
You can check the specifications of those IRQ chips to see how they transfer an external signal to the destination core (vCPU).
For example, if you were using an x86 virtual machine (I have no idea about VGIC) with an emulated IOAPIC, there are 24 pins; once you understand the (hardware) APIC, you know how it works.
https://elixir.bootlin.com/linux/v5.2.12/source/arch/x86/kvm/irq_comm.c#L271
https://elixir.bootlin.com/linux/v5.2.12/source/arch/x86/kvm/irq_comm.c#L38
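For comparison, the x86 routing entries in the irq_comm.c links above bind the .set callback to functions that poke the emulated interrupt controllers. Roughly (paraphrased from memory, not an exact copy of the kernel source):

static int kvm_set_pic_irq(struct kvm_kernel_irq_routing_entry *e,
                           struct kvm *kvm, int irq_source_id, int level,
                           bool line_status)
{
    /* Forward the line change to the emulated PIC; an analogous
     * entry forwards to the IOAPIC via kvm_ioapic_set_irq(). */
    return kvm_pic_set_irq(kvm->arch.vpic, e->irqchip.pin,
                           irq_source_id, level);
}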

'PTX JIT compilation failed' from cuModuleLoadData

Below is the code:
#define FILENAME "kernel.code"
#define kernel_name "hello_world"
#define THREADS 4

std::vector<char> load_file()
{
    std::ifstream file(FILENAME, std::ios::binary | std::ios::ate);
    std::streamsize fsize = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> buffer(fsize);
    if (!file.read(buffer.data(), fsize)) {
        failed("could not open code object '%s'\n", FILENAME);
    }
    return buffer;
}

struct joinable_thread : std::thread
{
    template <class... Xs>
    joinable_thread(Xs&&... xs) : std::thread(std::forward<Xs>(xs)...) // NOLINT
    {
    }

    joinable_thread& operator=(joinable_thread&& other) = default;
    joinable_thread(joinable_thread&& other) = default;

    ~joinable_thread()
    {
        if (this->joinable())
            this->join();
    }
};

void run(const std::vector<char>& buffer) {
    CUdevice device;
    CUDACHECK(cuDeviceGet(&device, 0));
    CUcontext context;
    CUDACHECK(cuCtxCreate(&context, 0, device));
    CUmodule Module;
    CUDACHECK(cuModuleLoadData(&Module, &buffer[0]));
    ...
}

void run_multi_threads(uint32_t n) {
    {
        auto buffer = load_file();
        std::vector<joinable_thread> threads;

        for (uint32_t i = 0; i < n; i++) {
            threads.emplace_back(std::thread{[&, i, buffer] {
                run(buffer);
            }});
        }
    }
}

int main() {
    CUDACHECK(cuInit(0));
    run_multi_threads(THREADS);
}
And the kernel.cu code used to produce the PTX is as follows:
#include "cuda_runtime.h"
extern "C" __global__ void hello_world(float* a, float* b) {
int tx = threadIdx.x;
b[tx] = a[tx];
}
I'm generating the PTX this way:
nvcc --ptx kernel.cu -o kernel.code
I'm using a machine with a GeForce GTX TITAN X.
And I'm facing the "PTX JIT compilation failed" error from cuModuleLoadData only when I try to use this with multiple threads. If I remove the multi-threading part and run it normally, the error doesn't occur.
Can anyone please tell me what is going wrong and how to overcome it?
As mentioned in the comments, I was able to get it to work by moving the load_file() call into main, so that the buffer read from the file stays valid, and then passing only the buffer to all the threads.
Actually, in the original code, the buffer will be destroyed once it leaves the '{...}' scope, so when a thread starts, it may read an invalid buffer.
If you put your buffer in main, it will not be destroyed or freed until the program exits.
So yes, it's because you pass an invalid buffer (which may already have been freed) to the CUDA code.
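A minimal sketch of that fix (assuming the rest of the program above is unchanged): load the file once in main so the buffer outlives every thread, and hand the threads a reference to it.

int main() {
    CUDACHECK(cuInit(0));
    auto buffer = load_file();  // lives until main returns, so it outlives every thread
    {
        std::vector<joinable_thread> threads;
        for (uint32_t i = 0; i < THREADS; i++) {
            threads.emplace_back([&buffer] { run(buffer); });  // share one long-lived buffer
        }
    }  // joinable_thread joins each thread in its destructor
}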

CreateThread doesn't work in my code

I wonder how to solve this problem.
I notice that CreateThread() doesn't work well in this code:
DWORD threadFunc1(LPVOID lParam)
{
    int cur = (int)lParam;
    while(1)
    {
        //Job1...
        //Reason
        break;
    }
    Start(cur + 1);
    Sleep(100);
}
void Start(int startNum)
{
    ....
    CreateThread(NULL, NULL, (LPTHREAD_START_ROUTINE)threadFunc1, &startNum, 0, &dwThreadId);
    ...
}

void btnClicking()
{
    Start(0);
}
In this code, a thread is created by Start(), and it calls Start() again when its work ends.
The second created thread does not work. I think the first thread disappears and the second thread is destroyed.
What is the best way to solve this?
OS: Win 7 64bit Ultimate.
Tool: Visual Studio 2008.
It does not work because your code has bugs in it. The signature of your thread function is wrong, and you are passing the startNum value to the thread in the wrong way: &startNum is the address of a local variable, which may already be gone by the time the thread reads it.
Try this instead:
DWORD WINAPI threadFunc1(LPVOID lParameter)
{
    int cur = (int) lParameter;
    while(1)
    {
        //Job1...
        //Reason
        break;
    }
    Start(cur + 1);
    Sleep(100);
    return 0;
}

void Start(int startNum)
{
    ....
    HANDLE hThread = CreateThread(NULL, 0, &threadFunc1, (LPVOID) startNum, 0, &dwThreadId);
    if (hThread != NULL)
    {
        // ... store it somewhere or close it, otherwise you are leaking it...
    }
    ...
}

void btnClicking()
{
    Start(0);
}
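If the payload ever grows beyond a single integer, a common alternative (a sketch, not part of the question's code) is to pass a small heap allocation that the thread takes ownership of:

DWORD WINAPI threadFunc1(LPVOID lParameter)
{
    int *p = (int *)lParameter;
    int cur = *p;
    delete p;  // the thread owns the allocation now
    // ... work ...
    Start(cur + 1);
    return 0;
}

void Start(int startNum)
{
    DWORD dwThreadId;
    HANDLE hThread = CreateThread(NULL, 0, &threadFunc1,
                                  new int(startNum), 0, &dwThreadId);
    if (hThread != NULL)
        CloseHandle(hThread);  // or keep the handle if you need to wait on the thread
}

This also sidesteps truncation concerns when casting an integer through LPVOID on 64-bit builds.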

QThread crashes the program?

I implemented QThread like this, but the program crashes when it runs.
I've searched and seen posts saying this is not the correct way to use QThread, but I cannot find any reason for the crashes: all I do is trigger 'on_Create_triggered()', and I guarantee the mutex is locked and unlocked properly.
I have tested the program for two days (checking results only with 'std::cerr << ...;' prints) but still cannot find the reason. My guess is that the thread may wait too long for the lock and cause the program to crash (which doesn't sound reasonable... :) )
My code:
Background.h
class Background : public QThread
{
    Q_OBJECT
public:
    Background(int& val, DEVMAP& map, QQueue<LogInfoItem*>& queue,
               QList<DEV*>& devlist, QList<IconLabel*>& icllist, QMutex& m)
        : val_i(val), DevMap(map), LogInfoQueue(queue),
          DevInfoList(devlist), IconLabelList(icllist), mutex(m)
    {}
    ~Background();

protected:
    void run(void);

private:
    DEVMAP& DevMap;
    QQueue<LogInfoItem*>& LogInfoQueue;
    QList<DEV*>& DevInfoList;
    QList<IconLabel*>& IconLabelList;
    int& val_i;
    QMutex& mutex;

    void rcv();
};
Background.cpp
#include "background.h"
Background::~Background()
{
LogFile->close();
}
void Background::run(void)
{
initFile();
while(1)
{
msleep(5);
rcv();
}
}
void Background::rcv()
{
mutex.lock();
...
...//access DevMap, LogInfoQueue, DevInfoList, IconLabelList and val_i;
...
mutex.unlock();
}
MainWindow (MainWindow has Background* back as a member):
void MainWindow::initThread()
{
    back = new Background(val_i, dev_map, logDisplayQueue, devInfoList, iconLabelList, mutex);
    back->start();
}

void MainWindow::on_Create_triggered()
{
    mutex.lock();
    ...
    ... //access DevMap, LogInfoQueue, DevInfoList, IconLabelList and val_i;
    ...
    mutex.unlock();
}
I have found the reason, which is more subtle.
(I used some code written by others and believed it was not broken; I was totally wrong! :) )
The broken code:
#define DATABUFLEN 96

typedef struct Para //totally 100 bytes
{
    UINT8 type;
    UINT8 len;
    UINT8 inType;
    UINT8 inLen;
    UINT8 value[DATABUFLEN]; //96 bytes here
} ERRORTLV;

class BitState
{
public:
    UINT8 dataBuf[DATABUFLEN];
    ......
};
And the function using it:
bool BitState::rcvData() //this function runs past the end of the array
{
    UINT8 data[12] =
    {
        0x72, 0x0A, 0x97, 0x08,
        0x06, 0x0A, 0x0C, 0x0F,
        0x1E, 0x2A, 0x50, 0x5F,
    }; //only 12 bytes
    UINT32 dataLen = 110;
    memcpy(this->dataBuf, data, dataLen); //copies 110 bytes to dataBuf,
                                          //but no error or warning from the compiler, and no runtime error points at the overrun
}

bool BitState::parseData(BitLog* bitLog) //points para_tmp at dataBuf, but only uses 0x08 + 4 = 12 bytes of dataBuf
{
    Para* para_tmp;
    if (*(this->dataBuf) == 0x77)
    {
        para_tmp = (ERRORTLV*)this->dataBuf;
    }
    if (para_tmp->type != 0x72 || para_tmp->inType != 0x97 || (para_tmp->len - para_tmp->inLen) != 2) // inLen == 0x08
    {
        return false;
    }
    else
    {
        // parse dataBuf according to Para's structure
        this->bitState.reset();
        for (int i = 0; i < para_tmp->inLen; i++) // inLen == 0x08 only !!!
        {
            this->bitState[para_tmp->value[i] - 6] = 1;
        }
        if (this->bitState.none())
            this->setState(NORMAL);
        else
            this->setState(FAULT);
        QString currentTime = (QDateTime::currentDateTime()).toString("yyyy.MM.dd hh:mm:ss.zzz");
        string sysTime = string((const char *)currentTime.toLocal8Bit());
        this->setCurTime(sysTime);
        this->addLog(sysTime, bitLog);
    }
    return true;
}

bool BitState::addLog(std::string sysTime, BitLog* bitLog) // this function is right
{
    bitLog->basicInfo = this->basicInfo; //not in dataBuf, already allocated and initialized (right)
    bitLog->bitState = this->bitState;   //state is set by setState(..)
    bitLog->rcvTime = sysTime;           //time
    return true;
}
Generally speaking, the program allocates a 96-byte array but uses memcpy(...) to copy 110 bytes into it, and later uses only 12 bytes of it.
All kinds of crashes appear as a result, which was confusing and frustrating... :( :( :(
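A cheap guard against this class of bug (a sketch built on the structures above): validate the length at the call site, so a bad length is rejected instead of silently corrupting memory.

bool BitState::rcvData()
{
    UINT8 data[12] =
    {
        0x72, 0x0A, 0x97, 0x08,
        0x06, 0x0A, 0x0C, 0x0F,
        0x1E, 0x2A, 0x50, 0x5F,
    };
    UINT32 dataLen = 110; // bogus length, larger than both buffers
    if (dataLen > sizeof(data) || dataLen > sizeof(this->dataBuf))
        return false;     // reject: would overread data and overrun dataBuf
    memcpy(this->dataBuf, data, dataLen);
    return true;
}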

Multithreaded program becomes nonresponsive on _multiple_ CPUs, but fine on a single CPU (when updating a ListView)

Update: I've reproduced the problem! Scroll lower to see the code.
Quick Notes
My Core i5 CPU has 2 cores, hyperthreading.
If I call SetProcessAffinityMask(GetCurrentProcess(), 1), everything is fine, even though the program is still multithreaded.
If I don't do that, and the program is running on Windows XP (it's fine on Windows 7 x64!), my GUI starts locking up for several seconds while I'm scrolling the list view and the icons are loading.
The Problem
Basically, when I run the program posted below (a reduced version of my original code) on Windows XP (Windows 7 is fine), unless I force the same logical CPU for all my threads, the program UI starts lagging behind by half a second or so.
(Note: Lots of edits to this post here, as I investigated the problem further.)
Note that the number of threads is the same -- only the affinity mask is different.
I've tried this out using two different methods of message-passing: the built-in GetMessage as well as my own BackgroundWorker.
The result? BackgroundWorker benefits from affinity to 1 logical CPU (virtually no lag), whereas GetMessage is completely hurt by it (the lag is now many seconds long).
I can't figure out why that would be happening -- shouldn't multiple CPUs work better than a single CPU?!
Why would there be such a lag, when the number of threads is the same?
More stats:
GetLogicalProcessorInformation returns:
0x0: {ProcessorMask=0x0000000000000003 Relationship=RelationProcessorCore ...}
0x1: {ProcessorMask=0x0000000000000003 Relationship=RelationCache ...}
0x2: {ProcessorMask=0x0000000000000003 Relationship=RelationCache ...}
0x3: {ProcessorMask=0x0000000000000003 Relationship=RelationCache ...}
0x4: {ProcessorMask=0x000000000000000f Relationship=RelationProcessorPackage ...}
0x5: {ProcessorMask=0x000000000000000c Relationship=RelationProcessorCore ...}
0x6: {ProcessorMask=0x000000000000000c Relationship=RelationCache ...}
0x7: {ProcessorMask=0x000000000000000c Relationship=RelationCache ...}
0x8: {ProcessorMask=0x000000000000000c Relationship=RelationCache ...}
0x9: {ProcessorMask=0x000000000000000f Relationship=RelationCache ...}
0xa: {ProcessorMask=0x000000000000000f Relationship=RelationNumaNode ...}
The Code
The code below should show this problem on Windows XP SP3.
(At least, it does on my computer!)
Compare these two:
Run the program normally, then scroll. You should see lag.
Run the program with the affinity command-line argument, then scroll. It should be almost completely smooth.
Why would this happen?
#define _WIN32_WINNT 0x502
#include <tchar.h>
#include <Windows.h>
#include <CommCtrl.h>

#pragma comment(lib, "kernel32.lib")
#pragma comment(lib, "comctl32.lib")
#pragma comment(lib, "user32.lib")

LONGLONG startTick = 0;

LONGLONG QPC()
{ LARGE_INTEGER v; QueryPerformanceCounter(&v); return v.QuadPart; }

LONGLONG QPF()
{ LARGE_INTEGER v; QueryPerformanceFrequency(&v); return v.QuadPart; }

bool logging = false;
bool const useWindowMessaging = true;  // GetMessage() or BackgroundWorker?
bool const autoScroll = false;         // for testing
class BackgroundWorker
{
    struct Thunk
    {
        virtual void operator()() = 0;
        virtual ~Thunk() { }
    };

    class CSLock
    {
        CRITICAL_SECTION& cs;
    public:
        CSLock(CRITICAL_SECTION& criticalSection)
            : cs(criticalSection)
        { EnterCriticalSection(&this->cs); }
        ~CSLock() { LeaveCriticalSection(&this->cs); }
    };

    template<typename T>
    class ScopedPtr
    {
        T *p;
        ScopedPtr(ScopedPtr const &) { }
        ScopedPtr &operator =(ScopedPtr const &) { }
    public:
        ScopedPtr() : p(NULL) { }
        explicit ScopedPtr(T *p) : p(p) { }
        ~ScopedPtr() { delete p; }
        T *operator ->() { return p; }
        T &operator *() { return *p; }
        ScopedPtr &operator =(T *p)
        {
            if (this->p != NULL) { __debugbreak(); }
            this->p = p;
            return *this;
        }
        operator T *const &() { return this->p; }
    };

    Thunk **const todo;
    size_t nToDo;
    CRITICAL_SECTION criticalSection;
    DWORD tid;
    HANDLE hThread, hSemaphore;
    volatile bool stop;
    static size_t const MAX_TASKS = 1 << 18;  // big enough for testing

    static DWORD CALLBACK entry(void *arg)
    { return ((BackgroundWorker *)arg)->process(); }

public:
    BackgroundWorker()
        : nToDo(0), todo(new Thunk *[MAX_TASKS]), stop(false), tid(0),
          hSemaphore(CreateSemaphore(NULL, 0, 1 << 30, NULL)),
          hThread(CreateThread(NULL, 0, entry, this, CREATE_SUSPENDED, &tid))
    {
        InitializeCriticalSection(&this->criticalSection);
        ResumeThread(this->hThread);
    }

    ~BackgroundWorker()
    {
        // Clear all the tasks
        this->stop = true;
        this->clear();
        LONG prev;
        if (!ReleaseSemaphore(this->hSemaphore, 1, &prev) ||
            WaitForSingleObject(this->hThread, INFINITE) != WAIT_OBJECT_0)
        { __debugbreak(); }
        CloseHandle(this->hSemaphore);
        CloseHandle(this->hThread);
        DeleteCriticalSection(&this->criticalSection);
        delete [] this->todo;
    }

    void clear()
    {
        CSLock lock(this->criticalSection);
        while (this->nToDo > 0)
        {
            delete this->todo[--this->nToDo];
        }
    }

    unsigned int process()
    {
        DWORD result;
        while ((result = WaitForSingleObject(this->hSemaphore, INFINITE))
               == WAIT_OBJECT_0)
        {
            if (this->stop) { result = ERROR_CANCELLED; break; }
            ScopedPtr<Thunk> next;
            {
                CSLock lock(this->criticalSection);
                if (this->nToDo > 0)
                {
                    next = this->todo[--this->nToDo];
                    this->todo[this->nToDo] = NULL;  // for debugging
                }
            }
            if (next) { (*next)(); }
        }
        return result;
    }

    template<typename Func>
    void add(Func const &func)
    {
        CSLock lock(this->criticalSection);
        struct FThunk : public virtual Thunk
        {
            Func func;
            FThunk(Func const &func) : func(func) { }
            void operator()() { this->func(); }
        };
        DWORD exitCode;
        if (GetExitCodeThread(this->hThread, &exitCode) &&
            exitCode == STILL_ACTIVE)
        {
            if (this->nToDo >= MAX_TASKS) { __debugbreak(); /*too many*/ }
            if (this->todo[this->nToDo] != NULL) { __debugbreak(); }
            this->todo[this->nToDo++] = new FThunk(func);
            LONG prev;
            if (!ReleaseSemaphore(this->hSemaphore, 1, &prev))
            { __debugbreak(); }
        }
        else { __debugbreak(); }
    }
};
LRESULT CALLBACK MyWindowProc(
    HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    enum { IDC_LISTVIEW = 101 };
    switch (uMsg)
    {
    case WM_CREATE:
    {
        RECT rc; GetClientRect(hWnd, &rc);
        HWND const hWndListView = CreateWindowEx(
            WS_EX_CLIENTEDGE, WC_LISTVIEW, NULL,
            WS_CHILDWINDOW | WS_VISIBLE | LVS_REPORT |
            LVS_SHOWSELALWAYS | LVS_SINGLESEL | WS_TABSTOP,
            rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top,
            hWnd, (HMENU)IDC_LISTVIEW, NULL, NULL);
        int const cx = GetSystemMetrics(SM_CXSMICON),
                  cy = GetSystemMetrics(SM_CYSMICON);
        HIMAGELIST const hImgList =
            ImageList_Create(
                GetSystemMetrics(SM_CXSMICON),
                GetSystemMetrics(SM_CYSMICON),
                ILC_COLOR32, 1024, 1024);
        ImageList_AddIcon(hImgList, (HICON)LoadImage(
            NULL, IDI_INFORMATION, IMAGE_ICON, cx, cy, LR_SHARED));
        LVCOLUMN col = { LVCF_TEXT | LVCF_WIDTH, 0, 500, TEXT("Name") };
        ListView_InsertColumn(hWndListView, 0, &col);
        ListView_SetExtendedListViewStyle(hWndListView,
            LVS_EX_DOUBLEBUFFER | LVS_EX_FULLROWSELECT | LVS_EX_GRIDLINES);
        ListView_SetImageList(hWndListView, hImgList, LVSIL_SMALL);
        for (int i = 0; i < (1 << 11); i++)
        {
            TCHAR text[128]; _stprintf(text, _T("Item %d"), i);
            LVITEM item =
            {
                LVIF_IMAGE | LVIF_TEXT, i, 0, 0, 0,
                text, 0, I_IMAGECALLBACK
            };
            ListView_InsertItem(hWndListView, &item);
        }
        if (autoScroll)
        {
            SetTimer(hWnd, 0, 1, NULL);
        }
        break;
    }
    case WM_TIMER:
    {
        HWND const hWndListView = GetDlgItem(hWnd, IDC_LISTVIEW);
        RECT rc; GetClientRect(hWndListView, &rc);
        if (!ListView_Scroll(hWndListView, 0, rc.bottom - rc.top))
        {
            KillTimer(hWnd, 0);
        }
        break;
    }
    case WM_NULL:
    {
        HWND const hWndListView = GetDlgItem(hWnd, IDC_LISTVIEW);
        int const iItem = (int)lParam;
        if (logging)
        {
            _tprintf(_T("#%I64lld ms:")
                     _T(" Received: #%d\n"),
                     (QPC() - startTick) * 1000 / QPF(), iItem);
        }
        int const iImage = 0;
        LVITEM const item = { LVIF_IMAGE, iItem, 0, 0, 0, NULL, 0, iImage };
        ListView_SetItem(hWndListView, &item);
        ListView_Update(hWndListView, iItem);
        break;
    }
    case WM_NOTIFY:
    {
        LPNMHDR const pNMHDR = (LPNMHDR)lParam;
        switch (pNMHDR->code)
        {
        case LVN_GETDISPINFO:
        {
            NMLVDISPINFO *const pInfo = (NMLVDISPINFO *)lParam;
            struct Callback
            {
                HWND hWnd;
                int iItem;
                void operator()()
                {
                    if (logging)
                    {
                        _tprintf(_T("#%I64lld ms: Sent: #%d\n"),
                                 (QPC() - startTick) * 1000 / QPF(),
                                 iItem);
                    }
                    PostMessage(hWnd, WM_NULL, 0, iItem);
                }
            };
            if (pInfo->item.iImage == I_IMAGECALLBACK)
            {
                if (useWindowMessaging)
                {
                    DWORD const tid =
                        (DWORD)GetWindowLongPtr(hWnd, GWLP_USERDATA);
                    PostThreadMessage(
                        tid, WM_NULL, 0, pInfo->item.iItem);
                }
                else
                {
                    Callback callback = { hWnd, pInfo->item.iItem };
                    if (logging)
                    {
                        _tprintf(_T("#%I64lld ms: Queued: #%d\n"),
                                 (QPC() - startTick) * 1000 / QPF(),
                                 pInfo->item.iItem);
                    }
                    ((BackgroundWorker *)
                        GetWindowLongPtr(hWnd, GWLP_USERDATA))
                            ->add(callback);
                }
            }
            break;
        }
        }
        break;
    }
    case WM_CLOSE:
    {
        PostQuitMessage(0);
        break;
    }
    }
    return DefWindowProc(hWnd, uMsg, wParam, lParam);
}
DWORD WINAPI BackgroundWorkerThread(LPVOID lpParameter)
{
    HWND const hWnd = (HWND)lpParameter;
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0 && msg.message != WM_QUIT)
    {
        if (msg.message == WM_NULL)
        {
            PostMessage(hWnd, msg.message, msg.wParam, msg.lParam);
        }
    }
    return 0;
}
int _tmain(int argc, LPTSTR argv[])
{
    startTick = QPC();
    bool const affinity = argc >= 2 && _tcsicmp(argv[1], _T("affinity")) == 0;
    if (affinity)
    { SetProcessAffinityMask(GetCurrentProcess(), 1 << 0); }
    bool const log = logging;  // disable temporarily
    logging = false;
    WNDCLASS wndClass =
    {
        0, &MyWindowProc, 0, 0, NULL, NULL, LoadCursor(NULL, IDC_ARROW),
        GetSysColorBrush(COLOR_3DFACE), NULL, TEXT("MyClass")
    };
    HWND const hWnd = CreateWindow(
        MAKEINTATOM(RegisterClass(&wndClass)),
        affinity ? TEXT("Window (1 CPU)") : TEXT("Window (All CPUs)"),
        WS_OVERLAPPEDWINDOW | WS_VISIBLE, CW_USEDEFAULT, CW_USEDEFAULT,
        CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, NULL, NULL);
    BackgroundWorker iconLoader;
    DWORD tid = 0;
    if (useWindowMessaging)
    {
        CreateThread(NULL, 0, &BackgroundWorkerThread, (LPVOID)hWnd, 0, &tid);
        SetWindowLongPtr(hWnd, GWLP_USERDATA, tid);
    }
    else { SetWindowLongPtr(hWnd, GWLP_USERDATA, (LONG_PTR)&iconLoader); }
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        if (!IsDialogMessage(hWnd, &msg))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        if (msg.message == WM_TIMER ||
            !PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE))
        { logging = log; }
    }
    PostThreadMessage(tid, WM_QUIT, 0, 0);
    return 0;
}
Based on the inter-thread timings you posted at http://ideone.com/fa2fM, it looks like there is a fairness issue at play here. Based solely on this assumption, here is my reasoning as to the apparent cause of the perceived lag and a potential solution to the problem.
It looks like there is a large number of LVN_GETDISPINFO messages being generated and processed on one thread by the window proc, and while the background worker thread is able to keep up and post messages back to the window at the same rate, the WM_NULL messages it posts are so far back in the queue that it takes time before they get handled.
When you set the processor affinity mask, you introduce more fairness into the system because the same processor must service both threads, which will limit the rate at which LVN_GETDISPINFO messages are generated relative to the non-affinity case. This means that the window proc message queue is likely not as deep when you post your WM_NULL messages, which in turn means that they will be processed 'sooner'.
It seems that you need to somehow bypass the queueing effect. Using SendMessage, SendMessageCallback or SendNotifyMessage instead of PostMessage may be ways to do this. In the SendMessage case, your worker thread will block until the window proc thread is finished its current message and processes the sent WM_NULL message, but you will be able to inject your WM_NULL messages more evenly into the message processing flow. See this page for an explanation of queued vs. non-queued message handling.
If you choose to use SendMessage, but you don't want to limit the rate at which you can obtain icons due to the blocking nature of SendMessage, then you can use a third thread. Your I/O thread would post messages to the third thread, while the third thread uses SendMessage to inject icon updates into the UI thread. In this fashion, you have control of the queue of satisfied icon requests, instead of interleaving them into the window proc message queue.
As for the difference in behaviour between Win7 and WinXP, there may be a number of reasons why you don't appear to see this effect on Win7. It could be that the list view common control is implemented differently and limits the rate at which LVN_GETDISPINFO messages are generated. Or perhaps the thread scheduling mechanism in Win7 switches thread contexts more frequently or more fairly.
EDIT:
Based on your latest change, try the following:
...
struct Callback
{
    HWND hWnd;
    int iItem;
    void operator()()
    {
        if (logging)
        {
            _tprintf(_T("#%I64lld ms: Sent: #%d\n"),
                     (QPC() - startTick) * 1000 / QPF(),
                     iItem);
        }
        SendNotifyMessage(hWnd, WM_NULL, 0, iItem);  // <----
    }
};
...
DWORD WINAPI BackgroundWorkerThread(LPVOID lpParameter)
{
    HWND const hWnd = (HWND)lpParameter;
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0 && msg.message != WM_QUIT)
    {
        if (msg.message == WM_NULL)
        {
            SendNotifyMessage(hWnd, msg.message, msg.wParam, msg.lParam);  // <----
        }
    }
    return 0;
}
EDIT 2:
After establishing that the LVN_GETDISPINFO messages are being placed in the queue using SendMessage instead of PostMessage, we can't use SendMessage ourselves to bypass them.
Still proceeding on the assumption that there is a glut of messages being processed by the wndproc before the icon results are being sent back from the worker thread, we need another way to get those updates handled as soon as they are ready.
Here's the idea (see the sketch after the list):
1. The worker thread places results in a synchronized queue-like data structure, and then posts (using PostMessage) a WM_NULL message to the wndproc, to ensure that the wndproc gets executed sometime in the future.
2. At the top of the wndproc (before the case statements), the UI thread checks the synchronized queue-like data structure to see if there are any results; if so, it removes one or more results from the queue and processes them.
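A sketch of that idea (illustrative names such as IconResult, g_results, and DrainResults; it assumes the window and worker code above):

#include <deque>

struct IconResult { int iItem; int iImage; };

CRITICAL_SECTION g_resultsLock;    // InitializeCriticalSection() once at startup
std::deque<IconResult> g_results;  // guarded by g_resultsLock

// Worker thread: publish a result, then poke the UI thread.
void PublishResult(HWND hWnd, IconResult const &r)
{
    EnterCriticalSection(&g_resultsLock);
    g_results.push_back(r);
    LeaveCriticalSection(&g_resultsLock);
    PostMessage(hWnd, WM_NULL, 0, 0);  // a wake-up only; the data travels via g_results
}

// UI thread: call at the top of MyWindowProc, before the switch.
void DrainResults(HWND hWndListView)
{
    for (;;)
    {
        IconResult r = { 0, 0 };
        EnterCriticalSection(&g_resultsLock);
        bool const have = !g_results.empty();
        if (have) { r = g_results.front(); g_results.pop_front(); }
        LeaveCriticalSection(&g_resultsLock);
        if (!have) { break; }
        LVITEM const item = { LVIF_IMAGE, r.iItem, 0, 0, 0, NULL, 0, r.iImage };
        ListView_SetItem(hWndListView, &item);
    }
}

This way the UI thread consumes every pending result the next time the wndproc runs, no matter how deep the message queue is.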
The issue has less to do with thread affinity and more to do with telling the list view that it does not need to keep re-requesting the item. Because you do not add the LVIF_DI_SETITEM flag to pInfo->item.mask in your LVN_GETDISPINFO handler, and because you call ListView_Update manually, every call to ListView_Update invalidates any item that still has its iImage set to I_IMAGECALLBACK.
You can fix this in one of two ways (or a combination of both):
1. Remove ListView_Update from your WM_NULL handler. The list view will automatically redraw the items you set the image for in your WM_NULL handler when you set them, and it will not attempt to redraw items you haven't set the image for more than once.
2. Set the LVIF_DI_SETITEM flag in pInfo->item.mask in your LVN_GETDISPINFO handler, and set pInfo->item.iImage to a value that is not I_IMAGECALLBACK.
I repro'd similar awful behavior doing a full page scroll on Vista. Doing either of the above fixed the issue while still updating the icons asynchronously.
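For illustration, option 2 above amounts to something like this inside the existing LVN_GETDISPINFO handler (a sketch; the placeholder image index 0 is an assumption):

case LVN_GETDISPINFO:
{
    NMLVDISPINFO *const pInfo = (NMLVDISPINFO *)lParam;
    if (pInfo->item.mask & LVIF_IMAGE)
    {
        pInfo->item.iImage = 0;               // a real (placeholder) index, not I_IMAGECALLBACK
        pInfo->item.mask |= LVIF_DI_SETITEM;  // tell the list view to store it and stop re-asking
        // ... still queue the real icon load asynchronously, as before ...
    }
    break;
}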
It's plausible to suggest that this is related to XP's hyperthreading/logical-core scheduling, and I will second IvoTops' suggestion to try this with hyperthreading disabled. Please try it and let us know.
Why? Because:
a) Logical cores offer poor parallelism for CPU-bound tasks. Running multiple CPU-bound threads on two logical HT cores of the same physical core is detrimental to performance. See, for example, this Intel paper - it explains how enabling HT might cause typical server threads to incur an increase in latency or processing time for each request (while improving net throughput).
b) Windows 7 does indeed have some HT/SMT (simultaneous multithreading) scheduling improvements. Mark Russinovich's slides here mention this briefly. Although they claim that the XP scheduler is SMT-aware, the fact that Windows 7 explicitly fixes something in this area implies there could be something lacking in XP. So I'm guessing that the OS isn't setting the thread affinity to the second core appropriately (perhaps because the second core might not be idle at the instant your second thread is scheduled, to speculate wildly).
You wrote "I just tried setting the CPU affinity of the process (or even the individual threads) to all potential combinations I could think of, on the same and on different logical CPUs".
Can we try to verify that the execution is actually on the second core, once you set this?
You can visually check this in Task Manager or with perfmon/perf counters.
Maybe post the code where you set the affinity of the threads (I note that you are not checking the return value of SetProcessAffinityMask; do check that as well).
If the Windows perf counters don't help, Intel's VTune Performance Analyzer is helpful for exactly this kind of thing.
I think you can also force the thread affinity manually using Task Manager.
One more thing: your Core i5 is either the Nehalem or the Sandy Bridge microarchitecture. The Nehalem-and-later HT implementation is significantly different from the prior-generation architectures (Core, etc.). In fact, Microsoft recommended disabling HT for running BizTalk Server on pre-Nehalem systems. So perhaps Windows XP does not handle the new HT architecture well.
This might be a hyperthreading bug. To check whether that's what's causing it, run your faulty program with hyperthreading turned off (you can usually switch it off in the BIOS). I have run into two issues in the last five years that only surfaced when hyperthreading was enabled.
