xSemaphoreGive() gets stuck when used by different threads - multithreading

I am working on an STM32F756ZG with FreeRTOS. I have one network thread that is created with osThreadDef(), which is part of the CMSIS-RTOS API. I also have other tasks running that are created using xTaskCreate(), which is part of the FreeRTOS API.
I have a semaphore that is shared by a temperature sensor and an EEPROM. In the network thread I try to read the IP address values from the EEPROM over I2C. The thread successfully takes the semaphore using xSemaphoreTake(), but when it is time to give the semaphore back using xSemaphoreGive() it gets lost, and when I hit pause it is sitting in I2C_WaitOnFlagUntilTimeout(). As a result it never loads the webpage.
The other tasks run fine, and the temperature sensor, which also uses I2C and the semaphore, returns its values correctly.
So my question is whether this problem is caused by sharing a semaphore between two threads created by different OS APIs. I am really struggling with this and any help would be really appreciated. Thanks a lot!
I am adding a little code snippet here.
/* Init networking thread */
osThreadDef(Start, StartNetworkThread, osPriorityNormal, 0, configMINIMAL_STACK_SIZE * 2);
osThreadCreate (osThread(Start), NULL);
start_threads(3);
HAL_ADC_Start_DMA(&hadc1, (uint32_t*) adc1vals, 1);
HAL_ADC_Start_DMA(&hadc2, (uint32_t*) adc2vals, 1);
xTaskCreate (vADC1, "vADC1", configMINIMAL_STACK_SIZE, NULL, uxPriority + 3, ( TaskHandle_t * ) NULL );
xTaskCreate (vADC2, "vADC2", configMINIMAL_STACK_SIZE, NULL, uxPriority + 3, ( TaskHandle_t * ) NULL );
xTaskCreate (vPIDloop, "vPIDloop", configMINIMAL_STACK_SIZE + 100, NULL, uxPriority + 2, ( TaskHandle_t * ) NULL );
xTaskCreate (vIO, "vIO", configMINIMAL_STACK_SIZE + 512, NULL, uxPriority + 1, ( TaskHandle_t * ) NULL ); //Run IO at least important priority
xTaskCreate (vControl, "vControl", configMINIMAL_STACK_SIZE + 512, NULL, uxPriority + 1, ( TaskHandle_t * ) NULL ); //Run control at least important priority
This is how my Semaphore is initialized:
// Initialize the semaphore that controls eeprom access
xI2C3Semaphore = xSemaphoreCreateMutex();
if( xI2C3Semaphore ==NULL)
{
while(1);
}
Following is the code for when I am reading the EEPROM:
int result = 0;
NvVarsEeprom_t eepromVar;
memset( &eepromVar, 0xff, sizeof(eepromVar) );
if( xI2C3Semaphore != NULL )
{
// Wait forever for semaphore
if( xSemaphoreTake( xI2C3Semaphore, (TickType_t)10 ) == pdTRUE )
{
// count = uxSemaphoreGetCount(xI2C3Semaphore);
// Read from EEPROM
if( nvdata_read((char *)&eepromVar, sizeof(eepromVar), addr) != HAL_OK )
{
//vTaskDelay(5);
if( nvdata_read((char *)&eepromVar, sizeof(eepromVar), addr) != HAL_OK )
{
return ERR_EEPROM;
}
}
//count = uxSemaphoreGetCount(xI2C3Semaphore);
// Give up the semaphore
if(xSemaphoreGive( xI2C3Semaphore ) != pdTRUE)
{
while(1);
}
// count = uxSemaphoreGetCount(xI2C3Semaphore);
}
}
if( result == 0 )
{
eepromVar.valid = NVP_VALID;
}
if( eepromVar.valid == NVP_VALID )
{
strncpy( buf, eepromVar.str, EepromVarSize-1 );
buf[EepromVarSize-1] = '\0';
}
else
{
return ERR_EEPROM;
}
return result;
The next code snippet is when I am reading from the temp sensor:
int tempC = 0;
if( xI2C3Semaphore != NULL )
{
// Wait forever for semaphore
if( xSemaphoreTake( xI2C3Semaphore, (TickType_t)10 ) == pdTRUE )
{
// Read from I2C3
tempC = heatSink_read();
// Give up the semaphore
if(xSemaphoreGive( xI2C3Semaphore ) != pdTRUE)
{
while(1);
}
}
}
return tempC;
When I jump from the bootloader to the application and try to read values from the EEPROM, I can take the semaphore, but it never gets given back by xSemaphoreGive().

First of all, make sure the semaphore got initialized properly, like this:
if ((SemId_I2C1_Rx = xSemaphoreCreateBinary()) == NULL) { goto InitFailed; };
Secondly, make sure you are using the proper function for giving the semaphore.
If it is given from an interrupt, you have to use
xSemaphoreGiveFromISR(SemId_I2C1_Rx, NULL);
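For reference, a minimal sketch of the ISR-safe give pattern, reusing the SemId_I2C1_Rx handle from the snippet above; the handler name is a placeholder and the yield step is the usual FreeRTOS idiom, not something taken from the question's code:
/* Hypothetical I2C transfer-complete interrupt handler (illustrative only). */
void I2C_TransferCompleteISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* In interrupt context the FromISR variant must be used instead of xSemaphoreGive(). */
    xSemaphoreGiveFromISR(SemId_I2C1_Rx, &xHigherPriorityTaskWoken);

    /* Request a context switch on exit if the give unblocked a higher-priority task. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}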

Related

How to recreate the swapchain after vkAcquireNextImageKHR returns VK_SUBOPTIMAL_KHR?

This Vulkan tutorial discusses swapchain recreation:
You could also decide to [recreate the swapchain] that if the swap chain is suboptimal, but I've chosen to proceed anyway in that case because we've already acquired an image.
My question is: how would one recreate the swapchain and not proceed in this case of VK_SUBOPTIMAL_KHR?
To see what I mean, let's look at the tutorial's render function:
void drawFrame() {
vkWaitForFences(device, 1, &inFlightFences[currentFrame], VK_TRUE, UINT64_MAX);
uint32_t imageIndex;
VkResult result = vkAcquireNextImageKHR(device, swapChain, UINT64_MAX, imageAvailableSemaphores[currentFrame], VK_NULL_HANDLE, &imageIndex);
if (result == VK_ERROR_OUT_OF_DATE_KHR) {
recreateSwapChain();
return;
/* else if (result == VK_SUBOPTIMAL_KHR) { createSwapchain(); ??? } */
} else if (result != VK_SUCCESS && result != VK_SUBOPTIMAL_KHR) {
throw std::runtime_error("failed to acquire swap chain image!");
}
if (imagesInFlight[imageIndex] != VK_NULL_HANDLE) {
vkWaitForFences(device, 1, &imagesInFlight[imageIndex], VK_TRUE, UINT64_MAX);
}
imagesInFlight[imageIndex] = inFlightFences[currentFrame];
VkSubmitInfo submitInfo{};
submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
VkSemaphore waitSemaphores[] = {imageAvailableSemaphores[currentFrame]};
VkPipelineStageFlags waitStages[] = {VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT};
submitInfo.waitSemaphoreCount = 1;
submitInfo.pWaitSemaphores = waitSemaphores;
submitInfo.pWaitDstStageMask = waitStages;
submitInfo.commandBufferCount = 1;
submitInfo.pCommandBuffers = &commandBuffers[imageIndex];
VkSemaphore signalSemaphores[] = {renderFinishedSemaphores[currentFrame]};
submitInfo.signalSemaphoreCount = 1;
submitInfo.pSignalSemaphores = signalSemaphores;
vkResetFences(device, 1, &inFlightFences[currentFrame]);
if (vkQueueSubmit(graphicsQueue, 1, &submitInfo, inFlightFences[currentFrame]) != VK_SUCCESS) {
throw std::runtime_error("failed to submit draw command buffer!");
}
VkPresentInfoKHR presentInfo{};
presentInfo.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
presentInfo.waitSemaphoreCount = 1;
presentInfo.pWaitSemaphores = signalSemaphores;
VkSwapchainKHR swapChains[] = {swapChain};
presentInfo.swapchainCount = 1;
presentInfo.pSwapchains = swapChains;
presentInfo.pImageIndices = &imageIndex;
result = vkQueuePresentKHR(presentQueue, &presentInfo);
if (result == VK_ERROR_OUT_OF_DATE_KHR || result == VK_SUBOPTIMAL_KHR || framebufferResized) {
framebufferResized = false;
recreateSwapChain();
} else if (result != VK_SUCCESS) {
throw std::runtime_error("failed to present swap chain image!");
}
currentFrame = (currentFrame + 1) % MAX_FRAMES_IN_FLIGHT;
}
The trouble is as follows:
1. vkAcquireNextImageKHR succeeds, signaling the semaphore and returning a valid but suboptimal image.
2. We recreate the swapchain.
3. We can't present the image from 1 with the swapchain from 2 due to VUID-VkPresentInfoKHR-pImageIndices-01430. We need to call vkAcquireNextImageKHR again to get a new image.
4. When we call vkAcquireNextImageKHR again, the semaphore is in the signaled state, which is not allowed (VUID-vkAcquireNextImageKHR-semaphore-01286); we need to 'unsignal' it.
Is the best solution here to destroy and recreate the semaphore?
Ad 3: you can keep using the old images (and swapchain) if you properly use the oldSwapchain parameter when creating the new swapchain, which is what I assume the tutorial suggests.
Anyway, what I do is paranoidly sanitize that toxic semaphore, like this:
// cleanup dangerous semaphore with signal pending from vkAcquireNextImageKHR (tie it to a specific queue)
// https://github.com/KhronosGroup/Vulkan-Docs/issues/1059
void cleanupUnsafeSemaphore( VkQueue queue, VkSemaphore semaphore ){
const VkPipelineStageFlags psw = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT;
VkSubmitInfo submit_info = {};
submit_info.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submit_info.waitSemaphoreCount = 1;
submit_info.pWaitSemaphores = &semaphore;
submit_info.pWaitDstStageMask = &psw;
vkQueueSubmit( queue, 1, &submit_info, VK_NULL_HANDLE );
}
After that, the semaphore can be properly caught with a fence or vkQueueWaitIdle, and then destroyed or reused.
I just destroy them, because the new semaphore count might differ, and I don't really consider swapchain recreation a hotspot (and also I just use vkDeviceWaitIdle in such case).
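For illustration, a minimal sketch of how that cleanup might be driven; the helper name handleSuboptimalAcquire and the way the semaphore handle is passed around are my assumptions, while presentQueue, device, recreateSwapChain and the semaphore itself come from the tutorial code above:
// Sketch: drain the stray signal, wait for the queue, then recreate the semaphore and swapchain.
void handleSuboptimalAcquire(VkQueue presentQueue, VkDevice device, VkSemaphore& acquireSemaphore)
{
    cleanupUnsafeSemaphore(presentQueue, acquireSemaphore); // queue a wait on the signaled semaphore
    vkQueueWaitIdle(presentQueue);                          // heavy-handed, but swapchain recreation is not a hotspot

    vkDestroySemaphore(device, acquireSemaphore, nullptr);
    VkSemaphoreCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;
    vkCreateSemaphore(device, &info, nullptr, &acquireSemaphore);

    recreateSwapChain();                                    // the tutorial's existing helper
}
Alternatively, the same drain-and-wait can be followed by reusing the semaphore instead of destroying it.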

How to create a text file (log file) with all rights (Read/Write) for all Windows users (Everyone) using sprintf in VC++

I am creating the Smart.log file using the following code:
void S_SetLogFileName()
{
char HomeDir[MAX_PATH];
if (strlen(LogFileName) == 0)
{
TCHAR AppDataFolderPath[MAX_PATH];
if (SUCCEEDED(SHGetFolderPath(NULL, CSIDL_COMMON_APPDATA, NULL, 0, AppDataFolderPath)))
{
sprintf(AppDataFolderPath, "%s\\Netcom\\Logs", AppDataFolderPath);
if (CreateDirectory(AppDataFolderPath, NULL) || ERROR_ALREADY_EXISTS == GetLastError())
sprintf(LogFileName,"%s\\Smart.log",AppDataFolderPath);
else
goto DEFAULTVALUE;
}
else
{
DEFAULTVALUE:
if (S_GetHomeDir(HomeDir,sizeof(HomeDir)))
sprintf(LogFileName,"%s\\Bin\\Smart.log",HomeDir);
else
strcpy(LogFileName,"Smart.log");
}
}
}
and opening and modifying it as follows:
void LogMe(char *FileName,char *s, BOOL PrintTimeStamp)
{
FILE *stream;
char buff[2048] = "";
char date[256];
char time[256];
SYSTEMTIME SystemTime;
if(PrintTimeStamp)
{
GetLocalTime(&SystemTime);
GetDateFormat(LOCALE_USER_DEFAULT,0,&SystemTime,"MM':'dd':'yyyy",date,sizeof(date));
GetTimeFormat(LOCALE_USER_DEFAULT,0,&SystemTime,"HH':'mm':'ss",time,sizeof(time));
sprintf(buff,"[%d - %s %s]", GetCurrentThreadId(),date,time);
}
stream = fopen( FileName, "a" );
fprintf( stream, "%s %s\n", buff, s );
fclose( stream );
}
Here's the problem:
UserA runs the program first; it creates \ProgramData\Netcom\Logs\Smart.log using S_SetLogFileName().
UserB runs the program next; it tries to append to / modify Smart.log and gets access denied.
What do I need to change in my code to allow all users to access the Smart.log file?
This is the solution I was looking for; hope it is useful for someone.
Referred from:
void SetFilePermission(LPCTSTR FileName)
{
PSID pEveryoneSID = NULL;
PACL pACL = NULL;
EXPLICIT_ACCESS ea[1];
SID_IDENTIFIER_AUTHORITY SIDAuthWorld = SECURITY_WORLD_SID_AUTHORITY;
// Create a well-known SID for the Everyone group.
AllocateAndInitializeSid(&SIDAuthWorld, 1,
SECURITY_WORLD_RID,
0, 0, 0, 0, 0, 0, 0,
&pEveryoneSID);
// Initialize an EXPLICIT_ACCESS structure for an ACE.
ZeroMemory(&ea, 1 * sizeof(EXPLICIT_ACCESS));
ea[0].grfAccessPermissions = 0xFFFFFFFF;
ea[0].grfAccessMode = GRANT_ACCESS;
ea[0].grfInheritance = NO_INHERITANCE;
ea[0].Trustee.TrusteeForm = TRUSTEE_IS_SID;
ea[0].Trustee.TrusteeType = TRUSTEE_IS_WELL_KNOWN_GROUP;
ea[0].Trustee.ptstrName = (LPTSTR)pEveryoneSID;
// Create a new ACL that contains the new ACEs.
SetEntriesInAcl(1, ea, NULL, &pACL);
// Initialize a security descriptor.
PSECURITY_DESCRIPTOR pSD = (PSECURITY_DESCRIPTOR)LocalAlloc(LPTR,
SECURITY_DESCRIPTOR_MIN_LENGTH);
InitializeSecurityDescriptor(pSD, SECURITY_DESCRIPTOR_REVISION);
// Add the ACL to the security descriptor.
SetSecurityDescriptorDacl(pSD,
TRUE, // bDaclPresent flag
pACL,
FALSE); // not a default DACL
//Change the security attributes
SetFileSecurity(FileName, DACL_SECURITY_INFORMATION, pSD);
if (pEveryoneSID)
FreeSid(pEveryoneSID);
if (pACL)
LocalFree(pACL);
if (pSD)
LocalFree(pSD);
}
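One way this might be wired into the logging code above, as a sketch: the idea is to apply the permissive DACL right after the log file is first created, so a file created by UserA stays writable for UserB. The wrapper name S_EnsureLogFileWritableByEveryone is hypothetical; LogFileName, S_SetLogFileName and SetFilePermission are the ones from the snippets above.
void S_EnsureLogFileWritableByEveryone()
{
    S_SetLogFileName();                       // builds LogFileName as shown earlier

    FILE* stream = fopen(LogFileName, "a");   // creates the file if it does not exist yet
    if (stream != NULL)
        fclose(stream);

    SetFilePermission(LogFileName);           // grant Everyone full access via the DACL above
}
Calling this once at startup, before LogMe is first used, should be enough; re-applying the DACL to an existing file is harmless.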

opencv capture image from webcam without post processing

I want to capture images from a webcam without any post-processing, that is, NO autofocus, exposure correction, white balance and so on. Basically, I want to capture continuous frames from the webcam, compare each frame with the previous one, and save them to disk only when there is an actual change. Because of the post-processing, almost every frame is returned as different for me.
Code so far:
using namespace cv;
bool identical(cv::Mat m1, cv::Mat m2)
{
if ( m1.cols != m2.cols || m1.rows != m2.rows || m1.channels() != m2.channels() || m1.type() != m2.type() )
{
return false;
}
for ( int i = 0; i < m1.rows; i++ )
{
for ( int j = 0; j < m1.cols; j++ )
{
if ( m1.at<Vec3b>(i, j) != m2.at<Vec3b>(i, j) )
{
return false;
}
}
}
return true;
}
int main() {
CvCapture* capture = cvCaptureFromCAM( 1);
int i=0,firsttime=0;
char filename[40];
Mat img1,img2;
if ( !capture ) {
fprintf( stderr, "ERROR: capture is NULL \n" );
getchar();
return -1;
}
cvNamedWindow( "img1", CV_WINDOW_AUTOSIZE );
cvNamedWindow( "img2", CV_WINDOW_AUTOSIZE );
while ( 1 ) {
IplImage* frame = cvQueryFrame( capture );
img1=frame;
if ( !frame ) {
fprintf( stderr, "ERROR: frame is null...\n" );
getchar();
break;
}
if(firsttime==0){
img2=frame;
fprintf( stderr, "firtstime\n" );
}
if ( (cvWaitKey(10) & 255) == 27 ) break;
i++;
sprintf(filename, "D:\\testimg\\img%d.jpg", i);
cv::cvtColor(img1, img1, CV_BGR2GRAY);
imshow( "img1", img1);
imshow( "img2", img2);
imwrite(filename,img1);
if(identical(img1,img2))
{
//write to diff path
}
img2=imread(filename,1);
firsttime=1;
}
// Release the capture device housekeeping
cvReleaseCapture( &capture );
return 0;
}
While you're at it, I'd be grateful if you could suggest a workaround using a different frame-comparison approach as well :)
I had this problem, and the only solution I found (and wrote) was a program based on DirectShow (in case you're using Windows), so no OpenCV code at all.
With a bit of luck, you can get the properties page of your camera and switch things off there:
VideoCapture cap(0);
cap.set(CV_CAP_PROP_SETTINGS,1);
And please, skip the C API in favour of C++; it'll go away soon.
Forgot to mention: you can change the cam settings from VLC as well.
@Prince, sorry, I have been looking for my DirectShow code but I didn't find it, and I don't think it would help anyway, because I used it for the DirectLink (Blackmagic Design) card. Since I had never done that before, it was pretty hard. My suggestion would be to try GraphEditPlus:
http://www.infognition.com/GraphEditPlus/
it helps a lot, and it's easy to use!
Good luck!
If you just wish to capture frames when there is an actual change, try background subtraction algorithms. Also, instead of just subtracting subsequent frames, use one of the many algorithms already implemented for you in OpenCV - they are much more robust to changes in lighting conditions etc. than vanilla background subtraction.
In Python:
backsub = cv2.BackgroundSubtractorMOG2(history=10000,varThreshold=100)
fgmask = backsub.apply(frame, None, 0.01)
frame is each picture read from your webcam stream.
Google for the corresponding class in C++.
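For the C++ side, here is a minimal sketch using the modern OpenCV 3+/4 API (cv::createBackgroundSubtractorMOG2 replaces the 2.x constructor used in the Python snippet); the 1% foreground threshold and the file naming are assumptions for illustration:
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    // history and varThreshold mirror the Python snippet above.
    cv::Ptr<cv::BackgroundSubtractorMOG2> backsub =
        cv::createBackgroundSubtractorMOG2(/*history=*/10000, /*varThreshold=*/100);

    cv::Mat frame, fgmask;
    int saved = 0;
    while (cap.read(frame))
    {
        backsub->apply(frame, fgmask, 0.01);   // same learning rate as the Python example

        // Assumption: treat the frame as "changed" when more than 1% of pixels are foreground.
        double changed = (double)cv::countNonZero(fgmask) / (fgmask.rows * fgmask.cols);
        if (changed > 0.01)
        {
            char filename[64];
            std::snprintf(filename, sizeof(filename), "img%d.jpg", ++saved);
            cv::imwrite(filename, frame);
        }

        cv::imshow("fgmask", fgmask);
        if (cv::waitKey(10) == 27) break;      // Esc to quit
    }
    return 0;
}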

TcpListener.AcceptSocket( ) behavior: gets stuck in one app upon termination, but does not in another?

I have two TCP server apps that are based on the same code but for some reason exhibit different behavior, and I'm ready to pull my hair out trying to figure out why. The code pattern is as follows:
public class TcpServer
{
public static void Start( bool bService )
{
..
oTcpListnr= new TcpListener( ip, iOutPort );
aTcpClient= new ArrayList( );
bListen= true;
oTcpListnr.Start( );
thOutComm= new Thread( new ThreadStart( AcceptTcpConn ) );
thOutComm.Name= "App-i.AcceptTcpConn";
thOutComm.Start( );
..
}
public static void Stop( )
{
bListen= false;
if( thOutComm != null )
{
thOutComm.Join( iTimeout );
thOutComm= null;
}
if( oTimer != null )
{
oTimer.Change( Timeout.Infinite, Timeout.Infinite );
oTimer.Dispose( );
}
}
public static void AcceptTcpConn( )
{
TcpState oState;
Socket oSocket= null;
while( bListen )
{
try
{
// if( oTcpListnr.Pending( ) )
{
oSocket= oTcpListnr.AcceptSocket( );
oState= new TcpState( oSocket );
if( oSocket.Connected )
{
Utils.PrnLine( "adding tcp: {0}", oSocket.RemoteEndPoint.ToString( ) );
Monitor.Enter( aTcpClient );
aTcpClient.Add( oState );
Monitor.Exit( aTcpClient );
oSocket.SetSocketOption( SocketOptionLevel.IP, SocketOptionName.DontFragment, true );
oSocket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.DontLinger, true );
// / oSocket.BeginReceive( oState.bData, 0, oState.bData.Length, SocketFlags.None, // no need to read
// / new AsyncCallback( AsyncTcpComm ), oState ); // for output only
}
else
{
Utils.PrnLine( "removing tcp: {0}", oSocket.RemoteEndPoint.ToString( ) );
Monitor.Enter( aTcpClient );
aTcpClient.Remove( oState );
Monitor.Exit( aTcpClient );
}
}
// Thread.Sleep( iTcpWake );
}
#region catch
catch( Exception x )
{
bool b= true;
SocketException se= x as SocketException;
if( se != null )
{
if( se.SocketErrorCode == SocketError.Interrupted )
{
b= false;
if( oSocket != null )
Utils.PrnLine( "TcpConn:\tclosing tcp: {0} ({1})", oSocket.RemoteEndPoint.ToString( ), se.SocketErrorCode );
}
}
if( b )
{
Utils.HandleEx( x );
}
}
#endregion
}
}
}
I omitted exception handling in the Start/Stop methods for brevity. The variation in behavior shows up during program termination: one app shuts down almost immediately, while the other gets stuck in the oTcpListnr.AcceptSocket() call. I know that this is a blocking call, but in that case why does it not present an issue for the first app?
Usage of this class cannot be any simpler, e.g. for a command-line tool:
class Program
{
public static void Main( string[] args )
{
TcpServer.Start( false );
Console.Read( );
Console.WriteLine( "\r\nStopping.." );
TcpServer.Stop( );
Console.WriteLine( "\r\nStopped. Press any key to exit.." );
Console.Read( );
}
}
Whether any clients have connected or not makes no difference; the second app always gets stuck.
I found a potential solution (the commented lines) by checking TcpListener.Pending() prior to the .AcceptSocket() call, but this immediately affects CPU utilization, so including something like Thread.Sleep() becomes a must. Altogether, though, I'd rather avoid this approach if possible, because of the extra connection wait times and CPU utilization (small as it is).
Still, the main question is: what may cause the exact same code to execute differently? Both apps are compiled against .NET 4 Client Profile, x86 (32-bit), with no specific optimizations. Thank you in advance for good ideas!
Finally found the root cause: I missed a couple of important lines [hidden in a #region] in the Stop( ) method, which starts the ball rolling. Here's how it should look:
public static void Stop( )
{
bListen= false;
if( thOutComm != null )
{
try
{
oTcpListnr.Stop( );
}
catch( Exception x )
{
Utils.HandleEx( x );
}
thOutComm.Join( iTimeout );
thOutComm= null;
}
if( oTimer != null )
{
oTimer.Change( Timeout.Infinite, Timeout.Infinite );
oTimer.Dispose( );
}
}
The call to TcpListener.Stop() kicks the wait cycle inside .AcceptSocket() out with an "A blocking operation was interrupted by a call to WSACancelBlockingCall" exception, which is then "normally ignored" (the check for SocketError.Interrupted) by the code that I originally had.

Visual C++ AVI writer function to push bitmaps (640x480) to AVI file?

I have a video capture card with SDK for Visual C++. Color frames (640 x 480) become available to me at 30 fps in a callback from the SDK. Currently, I am writing the entire image sequence out one at a time as individual bmp files in a separate thread -- that's 108,000 files in an hour, or about 100 GB per hour, which is not manageable. I would like to push these incoming frames to one AVI file instead, with optional compression. Where do I even start? Wading through the MSDN DirectShow documentation has confused me so far. Are there better examples out there? Is OpenCV the answer? I've looked at some examples, but I'm not sure OpenCV would even recognize the card as a capture device, nor do I understand how it even recognizes capture devices in the first place. Also, I'm already getting the frames in, I just need to put them out to AVI in some consumer thread that does not back up my producer thread. Thanks for any help.
I've used CAviFile before. It works pretty well; I had to tweak it a bit to allow the user to pick the codec, and I took that code from CAviGenerator. The interface for CAviFile is very simple; here's some sample code:
CAviFile *Avi = new CAviFile(fileName.c_str(), 0, 10);
HRESULT res = Avi->AppendNewFrame(Width, Height, ImageBuffer, BitsPerPixel);
if (FAILED(res))
{
std::cout << "Error recording AVI: " << Avi->GetLastErrorMessage() << std::endl;
}
delete Avi;
Obviously you have to ensure your ImageBuffer contains data in the right format etc. But once I got that kind of stuff all sorted out it worked great.
You can either use Video for Windows or DirectShow. Each comes with its own set of codecs (and can be extended).
Though Microsoft considers VfW deprecated, it is still perfectly usable, and it is easier to set up than DirectShow.
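To make the VfW route concrete, here is a minimal sketch of an AVI writer built on the AVIFile API (link with vfw32.lib). It writes uncompressed 24-bit frames; the class name and the assumption that the capture callback hands over bottom-up BGR DIB data are mine, and error handling is trimmed:
#include <windows.h>
#include <vfw.h>

class AviWriter
{
    PAVIFILE   m_file;
    PAVISTREAM m_stream;
    LONG       m_frame;
public:
    AviWriter() : m_file(NULL), m_stream(NULL), m_frame(0) {}

    bool Open(LPCTSTR path, int width, int height, int fps)
    {
        AVIFileInit();
        if (FAILED(AVIFileOpen(&m_file, path, OF_WRITE | OF_CREATE, NULL)))
            return false;

        AVISTREAMINFO si = {0};
        si.fccType = streamtypeVIDEO;
        si.dwScale = 1;
        si.dwRate  = fps;                             // e.g. 30 fps
        si.dwSuggestedBufferSize = width * height * 3;
        SetRect(&si.rcFrame, 0, 0, width, height);
        if (FAILED(AVIFileCreateStream(m_file, &m_stream, &si)))
            return false;

        // For optional compression, an AVIMakeCompressedStream() call would be inserted here.
        BITMAPINFOHEADER bih = {0};
        bih.biSize        = sizeof(bih);
        bih.biWidth       = width;
        bih.biHeight      = height;                   // positive height: bottom-up DIB assumed
        bih.biPlanes      = 1;
        bih.biBitCount    = 24;
        bih.biCompression = BI_RGB;
        bih.biSizeImage   = width * height * 3;
        return SUCCEEDED(AVIStreamSetFormat(m_stream, 0, &bih, sizeof(bih)));
    }

    bool AppendFrame(void* pixels, LONG cb)
    {
        return SUCCEEDED(AVIStreamWrite(m_stream, m_frame++, 1, pixels, cb,
                                        AVIIF_KEYFRAME, NULL, NULL));
    }

    void Close()
    {
        if (m_stream) AVIStreamRelease(m_stream);
        if (m_file)   AVIFileRelease(m_file);
        AVIFileExit();
    }
};
The producer thread can hand frames to a consumer thread that calls AppendFrame(), which keeps the disk work off the capture callback.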
Well you need to attach an AVI Mux (CLSID_AviDest) to your capture card. You then need to attach a File Writer (CLSID_FileWriter) and it will write out everything for you.
Admittedly, setting up the capture graph is not necessarily easy, as DirectShow makes you jump through a million and one hoops.
It's much easier using the ICaptureGraphBuilder2 interface. Thankfully Microsoft has given a really nice rundown of how to do this:
http://msdn.microsoft.com/en-us/library/dd318627.aspx
Adding an encoder is not easy though, and it is, conveniently, glossed over in that link.
Here is an example I wrote for an MFC app of mine that enumerates all the video compressors on a system.
BOOL LiveInputDlg::EnumerateVideoCompression()
{
CComboBox* pVideoCompression = (CComboBox*)GetDlgItem( IDC_COMBO_VIDEOCOMPRESSION );
pVideoCompression->SetExtendedUI( TRUE );
pVideoCompression->SetCurSel( pVideoCompression->AddString( _T( "<None>" ) ) );
ICreateDevEnum* pDevEnum = NULL;
IEnumMoniker* pEnum = NULL;
HRESULT hr = S_OK;
hr = CoCreateInstance( CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, (void**)&pDevEnum );
if ( FAILED( hr ) )
{
return FALSE;
}
hr = pDevEnum->CreateClassEnumerator( CLSID_VideoCompressorCategory, &pEnum, 0 );
pDevEnum->Release();
if ( FAILED( hr ) )
{
return FALSE;
}
if ( pEnum )
{
IMoniker* pMoniker = NULL;
hr = pEnum->Next( 1, &pMoniker, NULL );
while( hr == S_OK )
{
IPropertyBag* pPropertyBag = NULL;
hr = pMoniker->BindToStorage( NULL, NULL, IID_IPropertyBag, (void**)&pPropertyBag );
if ( FAILED( hr ) )
{
pMoniker->Release();
pEnum->Release();
return FALSE;
}
VARIANT varName;
VariantInit( &varName );
hr = pPropertyBag->Read( L"Description", &varName, NULL );
if ( FAILED( hr ) )
{
hr = pPropertyBag->Read( L"FriendlyName", &varName, NULL );
if ( FAILED( hr ) )
{
pPropertyBag->Release();
pMoniker->Release();
pEnum->Release();
return FALSE;
}
}
IBaseFilter* pBaseFilter = NULL;
pMoniker->BindToObject( NULL, NULL, IID_IBaseFilter, (void**)&pBaseFilter );
{
USES_CONVERSION;
TCHAR* pName = OLE2T( varName.bstrVal );
int index = pVideoCompression->AddString( pName );
pVideoCompression->SetItemDataPtr( index, pMoniker );
VariantClear( &varName );
pPropertyBag->Release();
}
hr = pEnum->Next( 1, &pMoniker, NULL );
}
pEnum->Release();
}
return TRUE;
}
Good Luck! :)
