Multiple threads in CMSIS-RTOS - STM32 Nucleo L053R8 - multithreading

I am developing with an RTOS (CMSIS-RTOS) on the STM32 Nucleo L053R8 kit, and I have an issue related to multiple tasks.
I create 4 tasks (task_1, task_2, task_3, task_4), but only 3 of them run.
This is part of my code:
#include "main.h"
#include "stm32l0xx_hal.h"
#include "cmsis_os.h"
osMutexId stdio_mutex;
osMutexDef(stdio_mutex);
int main(void){
.....
stdio_mutex = osMutexCreate(osMutex(stdio_mutex));
osThreadDef(defaultTask_1, StartDefaultTask_1, osPriorityNormal, 0, 128);
defaultTaskHandle = osThreadCreate(osThread(defaultTask_1), NULL);
osThreadDef(defaultTask_2, StartDefaultTask_2, osPriorityNormal, 0, 128);
defaultTaskHandle = osThreadCreate(osThread(defaultTask_2), NULL);
osThreadDef(defaultTask_3, StartDefaultTask_3, osPriorityNormal, 0, 128);
defaultTaskHandle = osThreadCreate(osThread(defaultTask_3), NULL);
osThreadDef(defaultTask_4, StartDefaultTask_4, osPriorityNormal, 0, 600);
defaultTaskHandle = osThreadCreate(osThread(defaultTask_4), NULL);
}
void StartDefaultTask_1(void const * argument){
    for(;;){
        osMutexWait(stdio_mutex, osWaitForever);
        printf("%s\n\r", __func__);
        osMutexRelease(stdio_mutex);
        osDelay(1000);
    }
}

void StartDefaultTask_2(void const * argument){
    for(;;){
        osMutexWait(stdio_mutex, osWaitForever);
        printf("%s\n\r", __func__);
        osMutexRelease(stdio_mutex);
        osDelay(1000);
    }
}

void StartDefaultTask_3(void const * argument){
    for(;;){
        osMutexWait(stdio_mutex, osWaitForever);
        printf("%s\n\r", __func__);
        osMutexRelease(stdio_mutex);
        osDelay(1000);
    }
}

void StartDefaultTask_4(void const * argument){
    for(;;){
        osMutexWait(stdio_mutex, osWaitForever);
        printf("%s\n\r", __func__);
        osMutexRelease(stdio_mutex);
        osDelay(1000);
    }
}
This is the result in the console (UART): only three of the four tasks ever print.
When I change the stack size for task 4 from 600 down to 128, as below:

    osThreadDef(defaultTask_4, StartDefaultTask_4, osPriorityNormal, 0, 128);
    defaultTaskHandle = osThreadCreate(osThread(defaultTask_4), NULL);

then no task runs at all.
Actually I want to create many threads for my application, but this issue makes that difficult to implement.
Could you let me know the root cause of the problem, and how to resolve it?
Thank you in advance!

There is no common, easy method of stack calculation; it depends on many factors.
I would suggest avoiding stack-greedy functions like printf, scanf, etc. Write your own versions: not as "smart" and universal, but far less resource-hungry.
Avoid large local variables, and be very careful when you allocate memory.
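For illustration, a lighter-weight print along those lines might look like this (a sketch assuming the usual Nucleo setup, where CubeMX has generated a UART_HandleTypeDef named huart2 for the board's virtual COM port):

    #include <string.h>

    extern UART_HandleTypeDef huart2;   /* assumption: generated by CubeMX */
    extern osMutexId stdio_mutex;

    /* Send a string plus CR/LF over the UART without printf's stack cost. */
    void uart_puts(const char *s)
    {
        osMutexWait(stdio_mutex, osWaitForever);
        HAL_UART_Transmit(&huart2, (uint8_t *)s, (uint16_t)strlen(s), HAL_MAX_DELAY);
        HAL_UART_Transmit(&huart2, (uint8_t *)"\r\n", 2, HAL_MAX_DELAY);
        osMutexRelease(stdio_mutex);
    }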

Following your suggestions, I checked in the debugger and found the root cause: the heap size is too small.
I resolved it in two ways:
increase the heap size: #define configTOTAL_HEAP_SIZE ((size_t)5120)
decrease the stack size: #define configMINIMAL_STACK_SIZE ((uint16_t)64)

    osThreadDef(defaultTask_6, StartDefaultTask_6, osPriorityNormal, 0, 64);

Do you know how to determine the maximum heap size? Please let me know.
Thank you so much.
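For reference, on a FreeRTOS-backed port (the usual STM32Cube setup with heap_4.c) the heap high-water mark can be queried at run time, which answers "how much heap does the application actually need":

    #include "FreeRTOS.h"
    #include "task.h"

    /* Highest heap usage so far: total size minus the lowest free level ever
     * observed. Size configTOTAL_HEAP_SIZE from this number plus some margin. */
    size_t heap_high_water_used(void)
    {
        return configTOTAL_HEAP_SIZE - xPortGetMinimumEverFreeHeapSize();
    }

    /* Similarly, uxTaskGetStackHighWaterMark(NULL) (requires
     * INCLUDE_uxTaskGetStackHighWaterMark) reports the minimum stack space
     * the calling task has ever had left, which helps size each task stack. */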

Related

Has anybody else noticed that Windows 11 multithreaded font usage causes heap corruption in GDI?

We have multiple printing applications that draw text into GDI (an enhanced metafile). When these painting actions (in short, something like CreateFont(), SelectFont(), DrawText(), GetObject(), ... DeselectFont(), DeleteFont()) are done in threads, the application crashes very soon in DeleteObject() of a font handle. If the threads are synchronized, it does not happen. Under Windows 10 there is no problem at all.
Reproduction with some simple code is not trivial, and our code is a little complex, querying the LOGFONT, querying the current object, and so on, to lay out the page to paint into (including word break etc.); a simple multithreaded sample does not show this behaviour. It must be an unfortunate combination of the font APIs (or a combination with other GDI object APIs).
The trace of the crash is always in the same place, a corrupted heap caused by the DeleteObject API:
ntdll.dll!_RtlReportCriticalFailure#12() Unknown
ntdll.dll!_RtlpReportHeapFailure#4() Unknown
ntdll.dll!_RtlpHpHeapHandleError#12() Unknown
ntdll.dll!_RtlpLogHeapFailure#24() Unknown
ntdll.dll!_RtlpFreeHeapInternal#20() Unknown
ntdll.dll!RtlFreeHeap() Unknown
gdi32full.dll!_vFreeCFONTCrit#4() Unknown
gdi32full.dll!_vDeleteLOCALFONT#4() Unknown
gdi32.dll!_InternalDeleteObject#4() Unknown
gdi32.dll!_DeleteObject#4() Unknown
I write this here in the hope of finding someone who has the same problem - or to be found by someone looking for others (like me here) ;)
OK, the culprit for our case of printing into a metafile DC has been found: the API GetTextExtentPoint() and its alias GetTextExtentPoint32() are not threadsafe in Windows 11 and corrupt the GDI heap if GDI text operations are used by multiple threads.
More findings:
DC is a metafile DC:
    the heap becomes corrupted if GetTextExtentPoint() is used
    everything works without this API
DC is a window DC:
    the code always hangs in an endless loop in ExtTextOut() or GetTextExtentPoint() (if opted in) in the application's painting loop. BTW: even in Windows 10...! Not a deadlock, but full processor load (on at least one of the CPUs, so there is some kind of synchronization)
The code is attached; you may play around with the macros SHOW_ERROR and USE_METAFILE:
#include "stdafx.h"
#include <windows.h>
#include <windowsx.h>
#include <process.h>
#include <assert.h>
#define SHOW_ERROR 1
#define USE_METAFILE 0
#define sizeofTSTR(b) (sizeof(b)/sizeof(TCHAR))
const int THREADCOUNT = 20;
struct scThreadData
{
public:
volatile LONG* _pnFinished;
TCHAR _szFilename[MAX_PATH];
DWORD _dwFileSize;
};
void _cdecl g_ThreadFunction(void* pParams)
{
scThreadData* pThreadData = reinterpret_cast<scThreadData*>(pParams);
PRINTDLG pd = {0};
HDC hDC = NULL;
::Sleep(rand() % 1000);
printf("start %d\n", ::GetCurrentThreadId());
#if USE_METAFILE
pd.lStructSize = sizeof(pd);
pd.Flags = PD_RETURNDC | PD_RETURNDEFAULT;
::PrintDlg(&pd);
RECT rcPage = {0,0,10000,10000};
hDC = ::CreateEnhMetaFile(pd.hDC, pThreadData->_szFilename, &rcPage, _T("Hallo"));
#else
hDC = ::GetDC(NULL);
#endif
for (int i = 0; i < 20000; ++i)
{
HFONT newFont = ::CreateFont(-100, 0, 0, 0, 0, 0, 0, 0, DEFAULT_CHARSET, OUT_TT_PRECIS, CLIP_DEFAULT_PRECIS, ANTIALIASED_QUALITY, 0, L"Arial");
HFONT oldFont = SelectFont(hDC, newFont);
::ExtTextOut(hDC, 0, 0, 0, NULL, _T("x"), 1, NULL);
#if SHOW_ERROR
SIZE sz = {};
::GetTextExtentPoint(hDC, L"Hallo", 5, &sz); // <<-- causes GDI heap to be corrupted
#endif
SelectFont(hDC, oldFont);
::DeleteFont(newFont);
}
#if USE_METAFILE
::DeleteEnhMetaFile(::CloseEnhMetaFile(hDC));
::DeleteDC(pd.hDC);
#else
::DeleteDC(hDC);
#endif
::DeleteFile(pThreadData->_szFilename);
printf("end %d\n", ::GetCurrentThreadId());
// done
::InterlockedIncrement(pThreadData->_pnFinished);
}
int _tmain(int argc, _TCHAR* argv[])
{
volatile LONG nFinished(0);
scThreadData TD[THREADCOUNT];
TCHAR szUserName[30];
TCHAR szComputerName[30];
DWORD dwLen;
dwLen = sizeofTSTR(szUserName);
::GetUserName(szUserName,&dwLen);
dwLen = sizeofTSTR(szComputerName);
::GetComputerName(szComputerName,&dwLen);
for (int nThread = 0; nThread < THREADCOUNT; ++nThread)
{
TD[nThread]._pnFinished = &nFinished;
_stprintf_s(TD[nThread]._szFilename,MAX_PATH,_T("test-%s-%d.emf"),szUserName,nThread);
_beginthread(g_ThreadFunction,10000,(void*)&TD[nThread]);
::Sleep(200);
}
Sleep(1000);
while (nFinished < THREADCOUNT)
{
::Sleep(100);
}
return 0;
}

Application memory increases continuously when DdeGetData() is called

I use DDE with Excel. If any update is available in Excel, the application gets notified, but after DdeGetData() is called the application's memory increases continuously, and very fast. I am new to VC++ programming; I tried to find a solution and googled it, but did not find anything appropriate.
Here is my code:
void DDERequest(DWORD idInst, HCONV hConv, char* szItem, char* sDesc)
{
    HSZ hszItem = DdeCreateStringHandle(idInst, szItem, 0);
    HDDEDATA hData = DdeClientTransaction(NULL,0,hConv,hszItem,CF_TEXT,
                                          XTYP_REQUEST,5000 , NULL);
    if (hData==NULL)
    {
        printf("Request failed: %s\n", szItem);
    }
    else
    {
        char szResult[255];
        DdeGetData(hData, (unsigned char *)szResult, 255, 0);
        printf("%s%s\n", sDesc, szResult);
    }
}
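Under the DDEML rules, the HDDEDATA returned by DdeClientTransaction() for an XTYP_REQUEST must be released by the caller, and the string handle must be freed too; neither happens above, which matches the continuous growth. A sketch of the fixed function:

    void DDERequest(DWORD idInst, HCONV hConv, char* szItem, char* sDesc)
    {
        HSZ hszItem = DdeCreateStringHandle(idInst, szItem, 0);
        HDDEDATA hData = DdeClientTransaction(NULL, 0, hConv, hszItem, CF_TEXT,
                                              XTYP_REQUEST, 5000, NULL);
        DdeFreeStringHandle(idInst, hszItem);      /* string handle leaks otherwise */
        if (hData == NULL)
        {
            printf("Request failed: %s\n", szItem);
        }
        else
        {
            char szResult[255];
            DdeGetData(hData, (unsigned char *)szResult, 255, 0);
            DdeFreeDataHandle(hData);              /* release the DDE data handle */
            printf("%s%s\n", sDesc, szResult);
        }
    }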

What could be causing this OpenGL segfault in glBufferSubData?

I've been whittling down this segfault for a while, and here's a pretty minimal reproducible example on my machine (below). I have the sinking feeling that it's a driver bug, but I'm very unfamiliar with OpenGL, so it's more likely I'm just doing something wrong.
Is this correct OpenGL 3.3 code? Should be fine regardless of platform and compiler and all that?
Here's the code, compiled with gcc -ggdb -lGL -lSDL2
#include <stdio.h>
#include <stdlib.h>   /* for malloc */
#include <string.h>   /* for memset */
#include "GL/gl.h"
#include "GL/glext.h"
#include "SDL2/SDL.h"

// this section is for loading OpenGL things from later versions.
typedef void (APIENTRY *GLGenVertexArrays) (GLsizei n, GLuint *arrays);
typedef void (APIENTRY *GLGenBuffers) (GLsizei n, GLuint *buffers);
typedef void (APIENTRY *GLBindVertexArray) (GLuint array);
typedef void (APIENTRY *GLBindBuffer) (GLenum target, GLuint buffer);
typedef void (APIENTRY *GLBufferData) (GLenum target, GLsizeiptr size, const GLvoid* data, GLenum usage);
typedef void (APIENTRY *GLBufferSubData) (GLenum target, GLintptr offset, GLsizeiptr size, const GLvoid* data);
typedef void (APIENTRY *GLGetBufferSubData) (GLenum target, GLintptr offset, GLsizeiptr size, const GLvoid* data);
typedef void (APIENTRY *GLFlush) (void);
typedef void (APIENTRY *GLFinish) (void);

GLGenVertexArrays glGenVertexArrays = NULL;
GLGenBuffers glGenBuffers = NULL;
GLBindVertexArray glBindVertexArray = NULL;
GLBindBuffer glBindBuffer = NULL;
GLBufferData glBufferData = NULL;
GLBufferSubData glBufferSubData = NULL;
GLGetBufferSubData glGetBufferSubData = NULL;

void load_gl_pointers() {
    glGenVertexArrays = (GLGenVertexArrays)SDL_GL_GetProcAddress("glGenVertexArrays");
    glGenBuffers = (GLGenBuffers)SDL_GL_GetProcAddress("glGenBuffers");
    glBindVertexArray = (GLBindVertexArray)SDL_GL_GetProcAddress("glBindVertexArray");
    glBindBuffer = (GLBindBuffer)SDL_GL_GetProcAddress("glBindBuffer");
    glBufferData = (GLBufferData)SDL_GL_GetProcAddress("glBufferData");
    glBufferSubData = (GLBufferSubData)SDL_GL_GetProcAddress("glBufferSubData");
    glGetBufferSubData = (GLGetBufferSubData)SDL_GL_GetProcAddress("glGetBufferSubData");
}
// end OpenGL loading stuff

#define CAPACITY (1 << 8)

// return nonzero if an OpenGL error has occurred.
int opengl_checkerr(const char* const label) {
    GLenum err;
    switch(err = glGetError()) {
    case GL_INVALID_ENUM:
        printf("GL_INVALID_ENUM");
        break;
    case GL_INVALID_VALUE:
        printf("GL_INVALID_VALUE");
        break;
    case GL_INVALID_OPERATION:
        printf("GL_INVALID_OPERATION");
        break;
    case GL_INVALID_FRAMEBUFFER_OPERATION:
        printf("GL_INVALID_FRAMEBUFFER_OPERATION");
        break;
    case GL_OUT_OF_MEMORY:
        printf("GL_OUT_OF_MEMORY");
        break;
    case GL_STACK_UNDERFLOW:
        printf("GL_STACK_UNDERFLOW");
        break;
    case GL_STACK_OVERFLOW:
        printf("GL_STACK_OVERFLOW");
        break;
    default: return 0;
    }
    printf(" %s\n", label);
    return 1;
}

int main(int nargs, const char* args[]) {
    printf("initializing..\n");
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_Window* const w =
        SDL_CreateWindow(
            "broken",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            1, 1,
            SDL_WINDOW_OPENGL
        );
    if(w == NULL) {
        printf("window was null\n");
        return 0;
    }
    SDL_GLContext context = SDL_GL_CreateContext(w);
    if(context == NULL) {
        printf("context was null\n");
        return 0;
    }
    load_gl_pointers();
    if(opengl_checkerr("init")) {
        return 1;
    }
    printf("GL_VENDOR: %s\n", glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));

    float* const vs = malloc(CAPACITY * sizeof(float));
    memset(vs, 0, CAPACITY * sizeof(float));

    unsigned int i = 0;
    while(i < 128000) {
        GLuint vertex_array;
        GLuint vertex_buffer;
        glGenVertexArrays(1, &vertex_array);
        glBindVertexArray(vertex_array);
        glGenBuffers(1, &vertex_buffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
        if(opengl_checkerr("gen/binding")) {
            return 1;
        }
        glBufferData(
            GL_ARRAY_BUFFER,
            CAPACITY * sizeof(float),
            vs, // initialize with `vs` just to make sure it's allocated.
            GL_DYNAMIC_DRAW
        );
        // verify that the memory is allocated by reading it back into `vs`.
        glGetBufferSubData(
            GL_ARRAY_BUFFER,
            0,
            CAPACITY * sizeof(float),
            vs
        );
        if(opengl_checkerr("creating buffer")) {
            return 1;
        }
        glFlush();
        glFinish();
        // segfault occurs here..
        glBufferSubData(
            GL_ARRAY_BUFFER,
            0,
            CAPACITY * sizeof(float),
            vs
        );
        glFlush();
        glFinish();
        ++i;
    }
    return 0;
}
When I bump the iterations from 64k to 128k, I start getting:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff754c859 in __memcpy_sse2_unaligned () from /usr/lib/libc.so.6
(gdb) bt
#0 0x00007ffff754c859 in __memcpy_sse2_unaligned () from /usr/lib/libc.so.6
#1 0x00007ffff2ea154d in ?? () from /usr/lib/xorg/modules/dri/i965_dri.so
#2 0x0000000000400e5c in main (nargs=1, args=0x7fffffffe8d8) at opengl-segfault.c:145
However, I can more than double the capacity (keeping the number of iterations at 64k) without segfaulting.
GL_VENDOR: Intel Open Source Technology Center
GL_RENDERER: Mesa DRI Intel(R) Haswell Mobile
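One thing worth noting about the loop above: it generates a new VAO and VBO on every iteration and never deletes them, so driver-side allocations accumulate across all 128k iterations. Per-iteration cleanup would look roughly like this (assuming glDeleteBuffers and glDeleteVertexArrays are loaded via SDL_GL_GetProcAddress like the other pointers):

    // at the end of each iteration, after the final glFinish():
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
    glDeleteBuffers(1, &vertex_buffer);
    glDeleteVertexArrays(1, &vertex_array);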
I had a very similar issue when calling glGenTextures and glBindTexture. I tried debugging, and when I would try to step through these lines I would get something like:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff26eaaa8 in ?? () from /usr/lib/x86_64-linux-gnu/dri/i965_dri.so
Note that prior to adding textures, I could successfully run programs with VBOs and VAOs and generate meshes fine. After looking into the answer suggesting switching from the xf86-video-intel driver to the xf86-video-fbdev driver, I would advise against it. (There really isn't that much info on this issue, or on users facing segfaults on Linux with integrated Intel graphics cards; perhaps a good question to ask the folks over at Intel Open Source.)
The solution I found was to stop using freeglut. Switch to GLFW instead. Whether there actually is some problem with the Intel Linux graphics stack is beside the matter; it seems the solvable problem is freeglut. If you want to use GLFW with your machine's most recent OpenGL core profile, you need just the following:
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 0);
glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
Setting forward-compat (although I've seen lots of posts arguing you shouldn't do this) means Mesa is free to select a core context, provided one sets the minimum context to 3.0 or higher. I guess freeglut must be going wrong somewhere in its interactions with Mesa; if anyone can shed some light on this, that would be great!
This is a bug in the Intel graphics drivers for Linux. Switching from the xf86-video-intel driver to the xf86-video-fbdev driver solves the problem.
Edit: I'm not recommending switching to fbdev, just using it as an experiment to see whether the segfault goes away.

ffmpeg/libavcodec memory management

The libavcodec documentation is not very specific about when to free allocated data and how to free it. After reading through documentation and examples, I've put together the sample program below. There are some specific questions inlined in the source but my general question is, am I freeing all memory properly in the code below? I realize the program below doesn't do any cleanup after errors -- the focus is on final cleanup.
The testfile() function is the one in question.
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
}
#include <cstdio>
using namespace std;
void AVFAIL (int code, const char *what) {
char msg[500];
av_strerror(code, msg, sizeof(msg));
fprintf(stderr, "failed: %s\nerror: %s\n", what, msg);
exit(2);
}
#define AVCHECK(f) do { int e = (f); if (e < 0) AVFAIL(e, #f); } while (0)
#define AVCHECKPTR(p,f) do { p = (f); if (!p) AVFAIL(AVERROR_UNKNOWN, #f); } while (0)
void testfile (const char *filename) {
AVFormatContext *format;
unsigned streamIndex;
AVStream *stream = NULL;
AVCodec *codec;
SwsContext *sws;
AVPacket packet;
AVFrame *rawframe;
AVFrame *rgbframe;
unsigned char *rgbdata;
av_register_all();
// load file header
AVCHECK(av_open_input_file(&format, filename, NULL, 0, NULL));
AVCHECK(av_find_stream_info(format));
// find video stream
for (streamIndex = 0; streamIndex < format->nb_streams && !stream; ++ streamIndex)
if (format->streams[streamIndex]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
stream = format->streams[streamIndex];
if (!stream) {
fprintf(stderr, "no video stream\n");
exit(2);
}
// initialize codec
AVCHECKPTR(codec, avcodec_find_decoder(stream->codec->codec_id));
AVCHECK(avcodec_open(stream->codec, codec));
int width = stream->codec->width;
int height = stream->codec->height;
// initialize frame buffers
int rgbbytes = avpicture_get_size(PIX_FMT_RGB24, width, height);
AVCHECKPTR(rawframe, avcodec_alloc_frame());
AVCHECKPTR(rgbframe, avcodec_alloc_frame());
AVCHECKPTR(rgbdata, (unsigned char *)av_mallocz(rgbbytes));
AVCHECK(avpicture_fill((AVPicture *)rgbframe, rgbdata, PIX_FMT_RGB24, width, height));
// initialize sws (for conversion to rgb24)
AVCHECKPTR(sws, sws_getContext(width, height, stream->codec->pix_fmt, width, height, PIX_FMT_RGB24, SWS_FAST_BILINEAR, NULL, NULL, NULL));
// read all frames fromfile
while (av_read_frame(format, &packet) >= 0) {
int frameok = 0;
if (packet.stream_index == (int)streamIndex)
AVCHECK(avcodec_decode_video2(stream->codec, rawframe, &frameok, &packet));
av_free_packet(&packet); // Q: is this necessary or will next av_read_frame take care of it?
if (frameok) {
sws_scale(sws, rawframe->data, rawframe->linesize, 0, height, rgbframe->data, rgbframe->linesize);
// would process rgbframe here
}
// Q: is there anything i need to free here?
}
// CLEANUP: Q: am i missing anything / doing anything unnecessary?
av_free(sws); // Q: is av_free all i need here?
av_free_packet(&packet); // Q: is this necessary (av_read_frame has returned < 0)?
av_free(rgbframe);
av_free(rgbdata);
av_free(rawframe); // Q: i can just do this once at end, instead of in loop above, right?
avcodec_close(stream->codec); // Q: do i need av_free(codec)?
av_close_input_file(format); // Q: do i need av_free(format)?
}
int main (int argc, char **argv) {
if (argc != 2) {
fprintf(stderr, "usage: %s filename\n", argv[0]);
return 1;
}
testfile(argv[1]);
}
Specific questions:
1) Is there anything I need to free in the frame processing loop, or will libav take care of memory management there for me?
2) Is av_free the correct way to free an SwsContext?
3) The frame loop exits when av_read_frame returns < 0. In that case, do I still need to av_free_packet when it's done?
4) Do I need to call av_free_packet every time through the loop, or will av_read_frame free/reuse the old AVPacket automatically?
5) I can just av_free the AVFrames at the end of the loop instead of reallocating them each time through, correct? It seems to be working fine, but I'd like to confirm that it's working because it's supposed to, rather than by luck.
6) Do I need to av_free(codec) the AVCodec or do anything else after avcodec_close on the AVCodecContext?
7) Do I need to av_free(format) the AVFormatContext or do anything else after av_close_input_file?
I also realize that some of these functions are deprecated in current versions of libav. For reasons that are not relevant here, I have to use them.
Those functions are not just deprecated, they've been removed some time ago. So you should really consider upgrading.
Anyway, as for your questions:
1) no, nothing more to free
2) no, use sws_freeContext()
3) no, if av_read_frame() returns an error then the packet does not contain any valid data
4) yes you have to free the packet after you're done with it and before next av_read_frame() call
5) yes, it's perfectly valid
6) no, the codec context itself is allocated by libavformat so av_close_input_file() is responsible for freeing it. So nothing more for you to do.
7) no, av_close_input_file() frees the format context so there should be nothing more for you to do.
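Putting those answers together, the cleanup in testfile() comes out roughly like this (a sketch keeping the question's old API names):

    // read all frames from file
    while (av_read_frame(format, &packet) >= 0) {
        int frameok = 0;
        if (packet.stream_index == (int)streamIndex)
            AVCHECK(avcodec_decode_video2(stream->codec, rawframe, &frameok, &packet));
        av_free_packet(&packet);      // (4) yes: free every packet you read
        if (frameok)
            sws_scale(sws, rawframe->data, rawframe->linesize, 0, height,
                      rgbframe->data, rgbframe->linesize);
    }
    // (3) no av_free_packet() here: a failed av_read_frame() leaves no data
    sws_freeContext(sws);             // (2) not av_free()
    av_free(rgbframe);                // (5) freeing the frames once at the end is fine
    av_free(rgbdata);
    av_free(rawframe);
    avcodec_close(stream->codec);     // (6) the context itself belongs to libavformat
    av_close_input_file(format);      // (7) also frees the format context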

Multithreaded program becomes nonresponsive on _multiple_ CPUs, but fine on a single CPU (when updating a ListView)

Update: I've reproduced the problem! Scroll lower to see the code.
Quick Notes
My Core i5 CPU has 2 cores, hyperthreading.
If I call SetProcessAffinityMask(GetCurrentProcess(), 1), everything is fine, even though the program is still multithreaded.
If I don't do that, and the program is running on Windows XP (it's fine on Windows 7 x64!), my GUI starts locking up for several seconds while I'm scrolling the list view and the icons are loading.
The Problem
Basically, when I run the program posted below (a reduced version of my original code) on Windows XP (Windows 7 is fine), unless I force the same logical CPU for all my threads, the program UI starts lagging behind by half a second or so.
(Note: Lots of edits to this post here, as I investigated the problem further.)
Note that the number of threads is the same -- only the affinity mask is different.
I've tried this out using two different methods of message-passing: the built-in GetMessage as well as my own BackgroundWorker.
The result? BackgroundWorker benefits from affinity to 1 logical CPU (virtually no lag), whereas GetMessage is completely hurt by it (the lag is now many seconds long).
I can't figure out why that would be happening -- shouldn't multiple CPUs work better than a single CPU?!
Why would there be such a lag, when the number of threads is the same?
More stats:
GetLogicalProcessorInformation returns:
0x0: {ProcessorMask=0x0000000000000003 Relationship=RelationProcessorCore ...}
0x1: {ProcessorMask=0x0000000000000003 Relationship=RelationCache ...}
0x2: {ProcessorMask=0x0000000000000003 Relationship=RelationCache ...}
0x3: {ProcessorMask=0x0000000000000003 Relationship=RelationCache ...}
0x4: {ProcessorMask=0x000000000000000f Relationship=RelationProcessorPackage ...}
0x5: {ProcessorMask=0x000000000000000c Relationship=RelationProcessorCore ...}
0x6: {ProcessorMask=0x000000000000000c Relationship=RelationCache ...}
0x7: {ProcessorMask=0x000000000000000c Relationship=RelationCache ...}
0x8: {ProcessorMask=0x000000000000000c Relationship=RelationCache ...}
0x9: {ProcessorMask=0x000000000000000f Relationship=RelationCache ...}
0xa: {ProcessorMask=0x000000000000000f Relationship=RelationNumaNode ...}
The Code
The code below should show this problem on Windows XP SP3.
(At least, it does on my computer!)
Compare these two:
Run the program normally, then scroll. You should see lag.
Run the program with the affinity command-line argument, then scroll. It should be almost completely smooth.
Why would this happen?
#define _WIN32_WINNT 0x502
#include <tchar.h>
#include <Windows.h>
#include <CommCtrl.h>

#pragma comment(lib, "kernel32.lib")
#pragma comment(lib, "comctl32.lib")
#pragma comment(lib, "user32.lib")

LONGLONG startTick = 0;

LONGLONG QPC()
{ LARGE_INTEGER v; QueryPerformanceCounter(&v); return v.QuadPart; }

LONGLONG QPF()
{ LARGE_INTEGER v; QueryPerformanceFrequency(&v); return v.QuadPart; }

bool logging = false;
bool const useWindowMessaging = true;  // GetMessage() or BackgroundWorker?
bool const autoScroll = false;         // for testing

class BackgroundWorker
{
    struct Thunk
    {
        virtual void operator()() = 0;
        virtual ~Thunk() { }
    };

    class CSLock
    {
        CRITICAL_SECTION& cs;
    public:
        CSLock(CRITICAL_SECTION& criticalSection)
            : cs(criticalSection)
        { EnterCriticalSection(&this->cs); }
        ~CSLock() { LeaveCriticalSection(&this->cs); }
    };

    template<typename T>
    class ScopedPtr
    {
        T *p;
        ScopedPtr(ScopedPtr const &) { }
        ScopedPtr &operator =(ScopedPtr const &) { }
    public:
        ScopedPtr() : p(NULL) { }
        explicit ScopedPtr(T *p) : p(p) { }
        ~ScopedPtr() { delete p; }
        T *operator ->() { return p; }
        T &operator *() { return *p; }
        ScopedPtr &operator =(T *p)
        {
            if (this->p != NULL) { __debugbreak(); }
            this->p = p;
            return *this;
        }
        operator T *const &() { return this->p; }
    };

    Thunk **const todo;
    size_t nToDo;
    CRITICAL_SECTION criticalSection;
    DWORD tid;
    HANDLE hThread, hSemaphore;
    volatile bool stop;

    static size_t const MAX_TASKS = 1 << 18; // big enough for testing

    static DWORD CALLBACK entry(void *arg)
    { return ((BackgroundWorker *)arg)->process(); }

public:
    BackgroundWorker()
        : nToDo(0), todo(new Thunk *[MAX_TASKS]()), // () zero-initializes the array
          stop(false), tid(0),
          hSemaphore(CreateSemaphore(NULL, 0, 1 << 30, NULL)),
          hThread(CreateThread(NULL, 0, entry, this, CREATE_SUSPENDED, &tid))
    {
        InitializeCriticalSection(&this->criticalSection);
        ResumeThread(this->hThread);
    }

    ~BackgroundWorker()
    {
        // Clear all the tasks
        this->stop = true;
        this->clear();
        LONG prev;
        if (!ReleaseSemaphore(this->hSemaphore, 1, &prev) ||
            WaitForSingleObject(this->hThread, INFINITE) != WAIT_OBJECT_0)
        { __debugbreak(); }
        CloseHandle(this->hSemaphore);
        CloseHandle(this->hThread);
        DeleteCriticalSection(&this->criticalSection);
        delete [] this->todo;
    }

    void clear()
    {
        CSLock lock(this->criticalSection);
        while (this->nToDo > 0)
        {
            delete this->todo[--this->nToDo];
        }
    }

    unsigned int process()
    {
        DWORD result;
        while ((result = WaitForSingleObject(this->hSemaphore, INFINITE))
               == WAIT_OBJECT_0)
        {
            if (this->stop) { result = ERROR_CANCELLED; break; }
            ScopedPtr<Thunk> next;
            {
                CSLock lock(this->criticalSection);
                if (this->nToDo > 0)
                {
                    next = this->todo[--this->nToDo];
                    this->todo[this->nToDo] = NULL; // for debugging
                }
            }
            if (next) { (*next)(); }
        }
        return result;
    }

    template<typename Func>
    void add(Func const &func)
    {
        CSLock lock(this->criticalSection);
        struct FThunk : public virtual Thunk
        {
            Func func;
            FThunk(Func const &func) : func(func) { }
            void operator()() { this->func(); }
        };
        DWORD exitCode;
        if (GetExitCodeThread(this->hThread, &exitCode) &&
            exitCode == STILL_ACTIVE)
        {
            if (this->nToDo >= MAX_TASKS) { __debugbreak(); /*too many*/ }
            if (this->todo[this->nToDo] != NULL) { __debugbreak(); }
            this->todo[this->nToDo++] = new FThunk(func);
            LONG prev;
            if (!ReleaseSemaphore(this->hSemaphore, 1, &prev))
            { __debugbreak(); }
        }
        else { __debugbreak(); }
    }
};

LRESULT CALLBACK MyWindowProc(
    HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    enum { IDC_LISTVIEW = 101 };
    switch (uMsg)
    {
    case WM_CREATE:
    {
        RECT rc; GetClientRect(hWnd, &rc);
        HWND const hWndListView = CreateWindowEx(
            WS_EX_CLIENTEDGE, WC_LISTVIEW, NULL,
            WS_CHILDWINDOW | WS_VISIBLE | LVS_REPORT |
            LVS_SHOWSELALWAYS | LVS_SINGLESEL | WS_TABSTOP,
            rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top,
            hWnd, (HMENU)IDC_LISTVIEW, NULL, NULL);
        int const cx = GetSystemMetrics(SM_CXSMICON),
                  cy = GetSystemMetrics(SM_CYSMICON);
        HIMAGELIST const hImgList =
            ImageList_Create(
                GetSystemMetrics(SM_CXSMICON),
                GetSystemMetrics(SM_CYSMICON),
                ILC_COLOR32, 1024, 1024);
        ImageList_AddIcon(hImgList, (HICON)LoadImage(
            NULL, IDI_INFORMATION, IMAGE_ICON, cx, cy, LR_SHARED));
        LVCOLUMN col = { LVCF_TEXT | LVCF_WIDTH, 0, 500, TEXT("Name") };
        ListView_InsertColumn(hWndListView, 0, &col);
        ListView_SetExtendedListViewStyle(hWndListView,
            LVS_EX_DOUBLEBUFFER | LVS_EX_FULLROWSELECT | LVS_EX_GRIDLINES);
        ListView_SetImageList(hWndListView, hImgList, LVSIL_SMALL);
        for (int i = 0; i < (1 << 11); i++)
        {
            TCHAR text[128]; _stprintf(text, _T("Item %d"), i);
            LVITEM item =
            {
                LVIF_IMAGE | LVIF_TEXT, i, 0, 0, 0,
                text, 0, I_IMAGECALLBACK
            };
            ListView_InsertItem(hWndListView, &item);
        }
        if (autoScroll)
        {
            SetTimer(hWnd, 0, 1, NULL);
        }
        break;
    }
    case WM_TIMER:
    {
        HWND const hWndListView = GetDlgItem(hWnd, IDC_LISTVIEW);
        RECT rc; GetClientRect(hWndListView, &rc);
        if (!ListView_Scroll(hWndListView, 0, rc.bottom - rc.top))
        {
            KillTimer(hWnd, 0);
        }
        break;
    }
    case WM_NULL:
    {
        HWND const hWndListView = GetDlgItem(hWnd, IDC_LISTVIEW);
        int const iItem = (int)lParam;
        if (logging)
        {
            _tprintf(_T("#%I64lld ms:")
                _T(" Received: #%d\n"),
                (QPC() - startTick) * 1000 / QPF(), iItem);
        }
        int const iImage = 0;
        LVITEM const item = {LVIF_IMAGE, iItem, 0, 0, 0, NULL, 0, iImage};
        ListView_SetItem(hWndListView, &item);
        ListView_Update(hWndListView, iItem);
        break;
    }
    case WM_NOTIFY:
    {
        LPNMHDR const pNMHDR = (LPNMHDR)lParam;
        switch (pNMHDR->code)
        {
        case LVN_GETDISPINFO:
        {
            NMLVDISPINFO *const pInfo = (NMLVDISPINFO *)lParam;
            struct Callback
            {
                HWND hWnd;
                int iItem;
                void operator()()
                {
                    if (logging)
                    {
                        _tprintf(_T("#%I64lld ms: Sent: #%d\n"),
                            (QPC() - startTick) * 1000 / QPF(),
                            iItem);
                    }
                    PostMessage(hWnd, WM_NULL, 0, iItem);
                }
            };
            if (pInfo->item.iImage == I_IMAGECALLBACK)
            {
                if (useWindowMessaging)
                {
                    DWORD const tid =
                        (DWORD)GetWindowLongPtr(hWnd, GWLP_USERDATA);
                    PostThreadMessage(
                        tid, WM_NULL, 0, pInfo->item.iItem);
                }
                else
                {
                    Callback callback = { hWnd, pInfo->item.iItem };
                    if (logging)
                    {
                        _tprintf(_T("#%I64lld ms: Queued: #%d\n"),
                            (QPC() - startTick) * 1000 / QPF(),
                            pInfo->item.iItem);
                    }
                    ((BackgroundWorker *)
                        GetWindowLongPtr(hWnd, GWLP_USERDATA))
                        ->add(callback);
                }
            }
            break;
        }
        }
        break;
    }
    case WM_CLOSE:
    {
        PostQuitMessage(0);
        break;
    }
    }
    return DefWindowProc(hWnd, uMsg, wParam, lParam);
}

DWORD WINAPI BackgroundWorkerThread(LPVOID lpParameter)
{
    HWND const hWnd = (HWND)lpParameter;
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0 && msg.message != WM_QUIT)
    {
        if (msg.message == WM_NULL)
        {
            PostMessage(hWnd, msg.message, msg.wParam, msg.lParam);
        }
    }
    return 0;
}

int _tmain(int argc, LPTSTR argv[])
{
    startTick = QPC();
    bool const affinity = argc >= 2 && _tcsicmp(argv[1], _T("affinity")) == 0;
    if (affinity)
    { SetProcessAffinityMask(GetCurrentProcess(), 1 << 0); }
    bool const log = logging; // disable temporarily
    logging = false;
    WNDCLASS wndClass =
    {
        0, &MyWindowProc, 0, 0, NULL, NULL, LoadCursor(NULL, IDC_ARROW),
        GetSysColorBrush(COLOR_3DFACE), NULL, TEXT("MyClass")
    };
    HWND const hWnd = CreateWindow(
        MAKEINTATOM(RegisterClass(&wndClass)),
        affinity ? TEXT("Window (1 CPU)") : TEXT("Window (All CPUs)"),
        WS_OVERLAPPEDWINDOW | WS_VISIBLE, CW_USEDEFAULT, CW_USEDEFAULT,
        CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, NULL, NULL);
    BackgroundWorker iconLoader;
    DWORD tid = 0;
    if (useWindowMessaging)
    {
        CreateThread(NULL, 0, &BackgroundWorkerThread, (LPVOID)hWnd, 0, &tid);
        SetWindowLongPtr(hWnd, GWLP_USERDATA, tid);
    }
    else { SetWindowLongPtr(hWnd, GWLP_USERDATA, (LONG_PTR)&iconLoader); }
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        if (!IsDialogMessage(hWnd, &msg))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        if (msg.message == WM_TIMER ||
            !PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE))
        { logging = log; }
    }
    PostThreadMessage(tid, WM_QUIT, 0, 0);
    return 0;
}
Based on the inter-thread timings you posted at http://ideone.com/fa2fM, it looks like there is a fairness issue at play here. Based solely on this assumption, here is my reasoning as to the apparent cause of the perceived lag and a potential solution to the problem.
It looks like there is a large number of LVN_GETDISPINFO messages being generated and processed on one thread by the window proc, and while the background worker thread is able to keep up and post messages back to the window at the same rate, the WM_NULL messages it posts are so far back in the queue that it takes time before they get handled.
When you set the processor affinity mask, you introduce more fairness into the system because the same processor must service both threads, which will limit the rate at which LVN_GETDISPINFO messages are generated relative to the non-affinity case. This means that the window proc message queue is likely not as deep when you post your WM_NULL messages, which in turn means that they will be processed 'sooner'.
It seems that you need to somehow bypass the queueing effect. Using SendMessage, SendMessageCallback or SendNotifyMessage instead of PostMessage may be ways to do this. In the SendMessage case, your worker thread will block until the window proc thread is finished its current message and processes the sent WM_NULL message, but you will be able to inject your WM_NULL messages more evenly into the message processing flow. See this page for an explanation of queued vs. non-queued message handling.
If you choose to use SendMessage, but you don't want to limit the rate at which you can obtain icons due to the blocking nature of SendMessage, then you can use a third thread. Your I/O thread would post messages to the third thread, while the third thread uses SendMessage to inject icon updates into the UI thread. In this fashion, you have control of the queue of satisfied icon requests, instead of interleaving them into the window proc message queue.
As for the difference in behaviour between Win7 and WinXP, there may be a number of reasons why you don't appear to see this effect on Win7. It could be that the list view common control is implemented differently and limits the rate at which LVN_GETDISPINFO messages are generated. Or perhaps the thread scheduling mechanism in Win7 switches thread contexts more frequently or more fairly.
EDIT:
Based on your latest change, try the following:
    ...
    struct Callback
    {
        HWND hWnd;
        int iItem;
        void operator()()
        {
            if (logging)
            {
                _tprintf(_T("#%I64lld ms: Sent: #%d\n"),
                    (QPC() - startTick) * 1000 / QPF(),
                    iItem);
            }
            SendNotifyMessage(hWnd, WM_NULL, 0, iItem); // <----
        }
    };
    ...

    DWORD WINAPI BackgroundWorkerThread(LPVOID lpParameter)
    {
        HWND const hWnd = (HWND)lpParameter;
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0 && msg.message != WM_QUIT)
        {
            if (msg.message == WM_NULL)
            {
                SendNotifyMessage(hWnd, msg.message, msg.wParam, msg.lParam); // <----
            }
        }
        return 0;
    }
EDIT 2:
After establishing that the LVN_GETDISPINFO messages are being placed in the queue using SendMessage instead of PostMessage, we can't use SendMessage ourselves to bypass them.
Still proceeding on the assumption that there is a glut of messages being processed by the wndproc before the icon results are sent back from the worker thread, we need another way to get those updates handled as soon as they are ready.
Here's the idea:
The worker thread places results in a synchronized queue-like data structure, then posts (using PostMessage) a WM_NULL message to the wndproc, to ensure that the wndproc runs sometime in the future.
At the top of the wndproc (before the case statements), the UI thread checks the synchronized queue for results; if there are any, it removes one or more of them and processes them.
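As a sketch of that idea (the names and fixed-size queue below are illustrative, not from the original post):

    static CRITICAL_SECTION g_resultLock;  // InitializeCriticalSection() at startup
    static int g_results[4096];            // indices of items whose icon is ready
    static size_t g_nResults = 0;

    // Worker side: record the result, then post WM_NULL purely as a wake-up.
    void QueueResult(HWND hWnd, int iItem)
    {
        EnterCriticalSection(&g_resultLock);
        if (g_nResults < 4096) { g_results[g_nResults++] = iItem; }
        LeaveCriticalSection(&g_resultLock);
        PostMessage(hWnd, WM_NULL, 0, 0);  // data travels via the queue, not lParam
    }

    // UI side: call at the top of MyWindowProc, before the switch.
    void DrainResults(HWND hWndListView)
    {
        EnterCriticalSection(&g_resultLock);
        while (g_nResults > 0)
        {
            int const iItem = g_results[--g_nResults];
            LVITEM const item = { LVIF_IMAGE, iItem, 0, 0, 0, NULL, 0, 0 };
            ListView_SetItem(hWndListView, &item);
        }
        LeaveCriticalSection(&g_resultLock);
    }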
The issue has less to do with thread affinity and more to do with telling the list view that the item really has been set when you update it. Because you do not add the LVIF_DI_SETITEM flag to pInfo->item.mask in your LVN_GETDISPINFO handler, and because you call ListView_Update manually, each ListView_Update call makes the list view invalidate every item that still has its iImage set to I_IMAGECALLBACK.
You can fix this in one of two ways (or a combination of both):
1) Remove ListView_Update from your WM_NULL handler. The list view automatically redraws the items whose image you set in the WM_NULL handler, and it will not attempt to redraw items you haven't set the image for more than once.
2) Set the LVIF_DI_SETITEM flag in pInfo->item.mask in your LVN_GETDISPINFO handler and set pInfo->item.iImage to a value that is not I_IMAGECALLBACK.
I reproduced similarly awful behavior doing a full-page scroll on Vista. Doing either of the above fixed the issue while still updating the icons asynchronously.
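For instance, option 2 might look like this inside the LVN_GETDISPINFO handler (a sketch; image index 0 stands in for whatever icon is actually ready):

    NMLVDISPINFO *const pInfo = (NMLVDISPINFO *)lParam;
    if (pInfo->item.mask & LVIF_IMAGE)
    {
        pInfo->item.iImage = 0;               // a real index, not I_IMAGECALLBACK
        pInfo->item.mask |= LVIF_DI_SETITEM;  // tell the control to store the value
    }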
It's plausible to suggest that this is related to XP's hyper-threading/logical-core scheduling, and I will second IvoTops' suggestion to try this with hyper-threading disabled. Please try this and let us know.
Why? Because:
a) Logical cores offer poor parallelism for CPU-bound tasks. Running multiple CPU-bound threads on two logical HT cores on the same physical core is detrimental to performance. See, for example, this Intel paper: it explains how enabling HT might cause typical server threads to incur an increase in latency or processing time for each request (while improving net throughput).
b) Windows 7 does indeed have some HT/SMT (simultaneous multithreading) scheduling improvements. Mark Russinovich's slides here mention this briefly. Although they claim that the XP scheduler is SMT-aware, the fact that Windows 7 explicitly fixes something in this area implies there could be something lacking in XP. So I'm guessing that the OS isn't setting the thread affinity to the second core appropriately (perhaps because the second core might not be idle at the instant your second thread is scheduled, to speculate wildly).
You wrote "I just tried setting the CPU affinity of the process (or even the individual threads) to all potential combinations I could think of, on the same and on different logical CPUs".
Can we try to verify that execution is actually on the second core once you set this?
You can visually check this in Task Manager or perfmon/perf counters.
Maybe post the code where you set the affinity of the threads. (I note that you are not checking the return value of SetProcessAffinityMask; do check that as well.)
If Windows perf counters don't help, Intel's VTune Performance Analyzer is helpful for exactly this kind of thing.
I think you can force the thread affinity manually using Task Manager.
One more thing: your Core i5 is either the Nehalem or the Sandy Bridge microarchitecture. The HT implementation in Nehalem and later is significantly different from the prior-generation architectures (Core, etc.). In fact, Microsoft recommended disabling HT for running BizTalk Server on pre-Nehalem systems. So perhaps Windows XP does not handle the new HT architecture well.
This might be a hyperthreading bug. To check whether that's what is causing it, run your faulty program with hyperthreading turned off (you can usually switch it off in the BIOS). I have run into two issues in the last five years that only surfaced when hyperthreading was enabled.
