I'd like to call the IDXGIDevice1::SetMaximumFrameLatency method from my DX12 app; for that I need to get a valid IDXGIDevice1 from the current Direct3D 12 device. Querying the interface returns E_NOINTERFACE:
IDXGIDevice * pDXGIDevice;
HRESULT hr = myDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
assert(hr != S_OK); // returns E_NOINTERFACE
IDXGIDevice1 * pDXGIDevice1;
HRESULT hr1 = myDevice->QueryInterface(__uuidof(IDXGIDevice1), (void **)&pDXGIDevice1);
assert(hr1 != S_OK); // returns E_NOINTERFACE
Not sure if I'm missing something, or whether there is a sequence of DXGI logic I need to implement to get a valid IDXGIDevice1 interface.
Would appreciate any hints & thanks in advance!
Klip
For Direct3D 12, this 'legacy pattern' of obtaining the DXGI factory is not supported, so your code above won't work: it's the first step of exactly that pattern:
ComPtr<IDXGIDevice3> dxgiDevice;
DX::ThrowIfFailed(
    m_d3dDevice.As(&dxgiDevice)
);
ComPtr<IDXGIAdapter> dxgiAdapter;
DX::ThrowIfFailed(
    dxgiDevice->GetAdapter(&dxgiAdapter)
);
ComPtr<IDXGIFactory4> dxgiFactory;
DX::ThrowIfFailed(
    dxgiAdapter->GetParent(IID_PPV_ARGS(&dxgiFactory))
);
For Direct3D 12, you should always create the DXGI factory explicitly. See Anatomy of Direct3D 12 Create Device.
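A minimal sketch of the explicit approach, assuming <dxgi1_4.h>, the DX::ThrowIfFailed helper, and the m_d3dDevice name from the snippet above (the adapter index and feature level are just examples):

ComPtr<IDXGIFactory4> dxgiFactory;
DX::ThrowIfFailed(
    CreateDXGIFactory2(0, IID_PPV_ARGS(&dxgiFactory))
);

// Pick an adapter from the factory, then create the device from it.
ComPtr<IDXGIAdapter1> adapter;
DX::ThrowIfFailed(
    dxgiFactory->EnumAdapters1(0, &adapter)
);
DX::ThrowIfFailed(
    D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
        IID_PPV_ARGS(&m_d3dDevice))
);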
In Direct3D 12 swap chains, you explicitly control the backbuffer swapping behavior. Ideally you'd create the swap chain with DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT and then use the waitable object to throttle your rendering speed instead. You can set the latency count via IDXGISwapChain2::SetMaximumFrameLatency, which defaults to 3 (MSDN is currently wrong about the default).
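A rough sketch of that pattern (it assumes the swap chain was created with DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT in DXGI_SWAP_CHAIN_DESC1::Flags, that m_swapChain is at least IDXGISwapChain2, and the latency of 2 is just an example):

DX::ThrowIfFailed(m_swapChain->SetMaximumFrameLatency(2));
HANDLE frameLatencyWaitableObject = m_swapChain->GetFrameLatencyWaitableObject();

// Each frame, block until DXGI signals it's time to start the next frame.
WaitForSingleObjectEx(frameLatencyWaitableObject, 1000, TRUE);
// ... record and execute command lists, then Present ...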
If you want to support 'higher-than-refresh-rate' updates (such as nVidia G-Sync or AMD FreeSync), then you use the new DXGI_PRESENT_ALLOW_TEARING flag for Present. For details on using this flag, see MSDN or this YouTube video.
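A hedged sketch of the check-then-present sequence (requires <dxgi1_5.h>; it reuses the dxgiFactory and m_swapChain names above, and the swap chain must also have been created with DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING):

BOOL allowTearing = FALSE;
ComPtr<IDXGIFactory5> factory5;
if (SUCCEEDED(dxgiFactory.As(&factory5)))
{
    if (FAILED(factory5->CheckFeatureSupport(DXGI_FEATURE_PRESENT_ALLOW_TEARING,
            &allowTearing, sizeof(allowTearing))))
    {
        allowTearing = FALSE;
    }
}

// With vsync off on supported hardware:
m_swapChain->Present(0, allowTearing ? DXGI_PRESENT_ALLOW_TEARING : 0);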
See also DirectX 12: Presentation Modes In Windows 10 (YouTube).
Using Win32, I have access to CLSID_CResamplerMediaObject, which means I can reduce my channel count from, say, 6 to 2.
On UWP this is no longer defined, and the only reference to a resampler I can find is CLSID_AudioResamplerMediaObject. When I create an instance of this class, however, and pass it my MFAudioFormat_Float or MFAudioFormat_PCM type, it says that the provided types aren't supported...
ComPtr<IUnknown> pTransformUnknown = nullptr;
DxUtil::ThrowIfFailed(CoCreateInstance(CLSID_AudioResamplerMediaObject, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pTransformUnknown)));
DxUtil::ThrowIfFailed(pTransformUnknown->QueryInterface(IID_PPV_ARGS(&mResampler)));
IMFMediaType* pInputType = nullptr;
DxUtil::ThrowIfFailed(MFCreateMediaType(&pInputType));
DxUtil::ThrowIfFailed(pInputType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio));
DxUtil::ThrowIfFailed(pInputType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM)); // Or MFAudioFormat_Float
DxUtil::ThrowIfFailed(pInputType->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, 16));
DxUtil::ThrowIfFailed(pInputType->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 6));
DxUtil::ThrowIfFailed(pInputType->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 48000));
DxUtil::ThrowIfFailed(mAudioSourceReader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, nullptr, pInputType)); // Fails here!
As usual, the Microsoft docs aren't helpful; I would appreciate any help!
Input file is Mp4 with 1 aac stream, 6 channels, 16 bit audio.
CLSID_AudioResamplerMediaObject and CLSID_CResamplerMediaObject are the same thing; both have the GUID {f447b69e-1884-4a7e-8055-346f74d6edb3}.
The error you mentioned is not coming from the audio resampler; you are getting it from the Source Reader API. The error is presumably reported correctly, indicating that for the given media source the reader cannot provide a conversion to the supplied media type. There can be multiple reasons for this to happen.
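A small diagnostic sketch (reusing mAudioSourceReader and DxUtil from the question) that inspects the stream's native type, which usually reveals the mismatch:

ComPtr<IMFMediaType> pNativeType;
DxUtil::ThrowIfFailed(mAudioSourceReader->GetNativeMediaType(
    MF_SOURCE_READER_FIRST_AUDIO_STREAM, 0, &pNativeType));

GUID subtype = {};
UINT32 channels = 0, sampleRate = 0;
pNativeType->GetGUID(MF_MT_SUBTYPE, &subtype);                       // e.g. MFAudioFormat_AAC
pNativeType->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &channels);         // 6 for this file
pNativeType->GetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, &sampleRate);

// One common cause: the reader will decode AAC to PCM/Float, but it does not
// downmix 6 channels to 2; that conversion is the resampler MFT's job, fed
// with the reader's decoded output as a separate step.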
I am just about ready to release my first little game with my game engine. However, through having some people test it, we found that the call to acquire the pointer to the AudioEngine interface fails for one of my testers.
The call works fine for me on both my desktop and my laptop, and it works fine on tester 2's computer. However, tester 1's computer will not succeed on the call.
By "fails" I mean it throws an exception which I am handling with a try/catch block. The "what" of the exception just tells me "AudioEngine" so no help there. He has a heavily customized computer which utilizes two graphics cards which are not linked and handle separate tasks. He uses the same Virtual Audio Cable set up that I have on my Desktop (used to separate voice sources for easier video editing re: streaming/game recording).
If anyone has any clue what might cause this call to fail, we would greatly appreciate the information. Please let me know if you require any additional information. Code for initialization is below:
//aud engine declaration is in the header for the class
unique_ptr<AudioEngine> audEngine;

//function being called
bool AudioEngineClass::InitializeAudioEngine()
{
    //Call this to create the DXTK Audio Engine
    //Setup flags:
    AUDIO_ENGINE_FLAGS eflags = AudioEngine_Default;
    eflags = eflags | AudioEngine_EnvironmentalReverb | //Enables environmental reverb for 3D (required for 3D audio)
        AudioEngine_ReverbUseFilters |                  //Enables additional features for 3D positional audio reverb
        AudioEngine_UseMasteringLimiter;                //Enables a mastering volume limiter to avoid distortion and clipping with 3D audio.

    //MessageBox(NULL, "Attempting to assign AudioEngine Pointer", "AudioEngine.InitializeAudioEngine", MB_OK);
    try
    {
        audEngine = make_unique<AudioEngine>(eflags);
    }
    catch (const exception& e)
    {
        //Tester 1 falls into this
        string exceptionStr = e.what();
        string outputStr = "Failed to Initialize Audio Engine. Exception: \n";
        outputStr += exceptionStr;
        MessageBox(NULL, outputStr.c_str(), "AudioEngine.InitializeAudioEngine", MB_OK);
    }
    //MessageBox(NULL, "Got past AudioEngine Pointer Assignment", "AudioEngine.InitializeAudioEngine", MB_OK);

    if (!audEngine)
    {
        //failed to create audio engine.
        initialized = false;
    }
    else
    {
        initialized = true;
    }
    return initialized;
}
UPDATE: Been trying stuff all day with no luck so far.
- Had the tester download the June 2010 DirectX DLLs, install them, and restart their computer.
- Had them update all of their drivers.
- Had them check their System folder (all XAudio2_#.dll files are present).
- They have Windows 10.
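One more thing worth trying, using only the DirectX Tool Kit API already shown above: retry with plain AudioEngine_Default, which would tell us whether one of the optional flags (reverb, mastering limiter) is what fails on that machine, or the device itself. A sketch:

try
{
    audEngine = make_unique<AudioEngine>(AudioEngine_Default);
    MessageBox(NULL, "Engine created with AudioEngine_Default only",
        "AudioEngine.InitializeAudioEngine", MB_OK);
}
catch (const exception& e)
{
    //Still failing with no flags points at the audio device/driver itself.
    MessageBox(NULL, e.what(), "AudioEngine.InitializeAudioEngine", MB_OK);
}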
I've been working on a C++ interface to capture images from all types of webcams via Microsoft Media Foundation. I already have a bit of code that can connect to several types of webcams and capture images in different resolutions and formats.
I know that under Windows XP it was possible to change different parameters of the webcam (such as white balance and exposure time) by using the DirectShow library. Unfortunately, the interface in DirectShow that made it possible to easily capture single frames from a webcam was removed from DirectShow under Windows 7. Does anybody know how I can access these parameters using Microsoft Media Foundation, or any other library that I can combine with Media Foundation?
You can call QueryInterface on a Media Foundation media source to obtain the DirectShow camera-control interfaces. Example code is given at Windows Media Foundation: Controlling Camera Properties.
This should let you set the available camera parameters like focus, white balance, etc.
HRESULT CMFVideoCaptureDlg::SetupCamera(IMFMediaSource* pCameraSource) {
    // The media source itself implements the DirectShow camera-control interfaces.
    CComQIPtr<IAMCameraControl> spCameraControl(pCameraSource);
    HRESULT hr = S_OK;
    if (spCameraControl) {
        long min, max, step, def, control;
        hr = spCameraControl->GetRange(CameraControl_Exposure, &min, &max, &step, &def, &control);
        if (SUCCEEDED(hr))
            hr = spCameraControl->Set(CameraControl_Exposure, 1, CameraControl_Flags_Manual);
    }
    CComQIPtr<IAMVideoProcAmp> spVideo(pCameraSource);
    if (spVideo)
        hr = spVideo->Set(VideoProcAmp_WhiteBalance, 0, VideoProcAmp_Flags_Auto);
    return hr;
}
It turns out Media Foundation does not define any specific interfaces for these tasks. Curiously enough, it implements interfaces defined by its predecessor, DirectShow, on its media source (represented by the IMFMediaSource interface), when that media source is a video camera.
DirectShow is still good in Windows 7 (the easiest way to check is with GraphEdit and AMCap from the Windows SDK). Media Foundation, however, lacks essential support in earlier versions of Windows.
This article has the code quoted above, and it works like a charm!
IAMCameraControl and IAMVideoProcAmp still support white balance, pan, and zoom in Windows 8. Camera control is so far not part of the Media Foundation Transform (MFT) model; we have to use DirectShow to do these things.
I am targeting Windows machines. I need access to the pointer to the byte array describing the individual streaming frames from an attached USB webcam. I saw the PlayCap DirectShow sample from the Windows SDK, but I don't see how to get to the raw data; frankly, I don't understand how the video actually gets to the window. Since I don't really need anything other than the video capture, I would prefer not to use OpenCV.
Visual Studio 2008, C++
Insert the sample grabber filter. Connect the camera source to the sample grabber and then to the null renderer. The sample grabber is a transform, so you need to feed the output somewhere, but if you don't need to render it, the null renderer is a good choice.
You can configure the sample grabber using ISampleGrabber. You can arrange a callback to your app for each frame, giving you either a pointer to the bits themselves, or a pointer to the IMediaSample object which will also give you the metadata.
You need to implement ISampleGrabberCB on your object, and then you need something like this (pseudocode; a minimal callback sketch follows the snippet):
IBaseFilterPtr pGrabberFilter;   // the Sample Grabber filter, already created and added to the graph
ISampleGrabberPtr m_pGrabber;

m_pGrabber = pGrabberFilter;          // QI the filter for ISampleGrabber
m_pGrabber->SetBufferSamples(false);  // no internal copy; we read via the callback
m_pGrabber->SetOneShot(false);        // keep streaming after the first sample

// force to 24-bit mode
AM_MEDIA_TYPE mt;
ZeroMemory(&mt, sizeof(mt));
mt.majortype = MEDIATYPE_Video;
mt.subtype = MEDIASUBTYPE_RGB24;
m_pGrabber->SetMediaType(&mt);

// 0 = whole-sample callback (SampleCB); 'this' must implement ISampleGrabberCB
m_pGrabber->SetCallback(this, 0);

// SetCallback increments a refcount on ourselves, but we own the grabber,
// so this is a cycle: release the extra reference here, and AddRef again
// before calling SetCallback(NULL) at teardown.
Release();
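For completeness, here is a minimal sketch of the callback object (the class name FrameGrabberCB is made up; only SampleCB is used, and BufferCB is a stub):

class FrameGrabberCB : public ISampleGrabberCB
{
    LONG m_ref;
public:
    FrameGrabberCB() : m_ref(1) {}

    // IUnknown
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) {
            *ppv = static_cast<ISampleGrabberCB*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_ref); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG ref = InterlockedDecrement(&m_ref);
        if (ref == 0) delete this;
        return ref;
    }

    // Called on the streaming thread for each frame.
    STDMETHODIMP SampleCB(double sampleTime, IMediaSample* pSample)
    {
        BYTE* pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData))) {
            long size = pSample->GetActualDataLength();
            // pData now points at 'size' bytes of RGB24 pixels for this frame.
        }
        return S_OK;
    }

    STDMETHODIMP BufferCB(double, BYTE*, long) { return S_OK; }
};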
Is there a good library to use for gathering user input in Linux from the mouse/keyboard/joystick that doesn't force you to create a visible window to do so? SDL lets you get user input in a reasonable way, but seems to force you to create a window, which is troublesome if you have abstracted control so the control machine doesn't have to be the same as the render machine. However, if the control and render machines are the same, this results in an ugly little SDL window on top of your display.
Edit To Clarify:
The renderer has an output window; in its normal use case, that window is full screen, except when the renderer and controller are running on the same computer, just so it is possible to give the controller focus. There can actually be multiple renderers displaying different views of the same data on different computers, all controlled by the same controller, hence the total decoupling of input from output (which makes the built-in X11 client/server display mechanism less usable here). Multiple controller applications for one renderer are also possible. Communication between the controllers and renderers is via sockets.
OK, if you're under X11 and you want to get the kbd, you need to do a grab.
If you're not, my only good answer is ncurses from a terminal.
Here's how you grab everything from the keyboard and release again:
/* Demo code, needs more error checking; compile
 * with "gcc nameofthisfile.c -lX11". */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <X11/Xlib.h>

int main(int argc, char **argv)
{
    Display *dpy;
    XEvent ev;
    char *s;
    unsigned int kc;
    int quit = 0;

    if (NULL == (dpy = XOpenDisplay(NULL))) {
        perror(argv[0]);
        exit(1);
    }

    /*
     * You might want to warp the pointer to somewhere that you know
     * is not associated with anything that will drain events.
     * (void)XWarpPointer(dpy, None, DefaultRootWindow(dpy), 0, 0, 0, 0, x, y);
     */
    XGrabKeyboard(dpy, DefaultRootWindow(dpy),
                  True, GrabModeAsync, GrabModeAsync, CurrentTime);

    printf("KEYBOARD GRABBED! Hit 'q' to quit!\n"
           "If this job is killed or you get stuck, use Ctrl-Alt-F1\n"
           "to switch to a console (if possible) and run something that\n"
           "ungrabs the keyboard.\n");

    /* A very simple event loop: start at "man XEvent" for more info. */
    /* Also see "apropos XGrab" for various ways to lock down access to
     * certain types of info. coming out of or going into the server. */
    for (; !quit;) {
        XNextEvent(dpy, &ev);
        switch (ev.type) {
        case KeyPress:
            kc = ev.xkey.keycode;
            s = XKeysymToString(XKeycodeToKeysym(dpy, kc, 0));
            /* s is NULL or a static no-touchy return string. */
            if (s) printf("KEY:%s\n", s);
            if (s && !strcmp(s, "q")) quit = ~0;
            break;
        case Expose:
            /* Often, it's a good idea to drain residual exposes to
             * avoid visiting Blinky's Fun Club. */
            while (XCheckTypedEvent(dpy, Expose, &ev)) /* empty body */ ;
            break;
        case ButtonPress:
        case ButtonRelease:
        case KeyRelease:
        case MotionNotify:
        case ConfigureNotify:
        default:
            break;
        }
    }

    XUngrabKeyboard(dpy, CurrentTime);
    if (XCloseDisplay(dpy)) {
        perror(argv[0]);
        exit(1);
    }
    return 0;
}
Run this from a terminal and all kbd events should hit it. I'm testing it under Xorg
but it uses venerable, stable Xlib mechanisms.
Hope this helps.
BE CAREFUL with grabs under X. When you're new to them, sometimes it's a good
idea to start a time delay process that will ungrab the server when you're
testing code and let it sit and run and ungrab every couple of minutes.
It saves having to kill or switch away from the server to externally reset state.
From here, I'll leave it to you to decide how to multiplex renderers. Read
the XGrabKeyboard docs and XEvent docs to get started.
If you have small windows exposed at the screen corners, you could jam
the pointer into one corner to select a controller. XWarpPointer can
shove the pointer to one of them as well from code.
One more point: you can grab the pointer as well, and other resources. If you had one controller running on the box in front of which you sit, you could use keyboard and mouse input to switch it between open sockets with different renderers. You shouldn't need to resize the output window to less than full screen anymore with this approach, ever. With more work, you could actually drop alpha-blended overlays on top using the SHAPE and COMPOSITE extensions to get a nice overlay feature in response to user input (which might count as gilding the lily).
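For the pointer, the grab is analogous; a sketch that routes mouse events to you no matter where the pointer is on screen:

XGrabPointer(dpy, DefaultRootWindow(dpy), True,
    ButtonPressMask | ButtonReleaseMask | PointerMotionMask,
    GrabModeAsync, GrabModeAsync, None, None, CurrentTime);
/* ... handle ButtonPress/ButtonRelease/MotionNotify in the same event loop ... */
XUngrabPointer(dpy, CurrentTime);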
For the mouse you can use GPM.
I'm not sure off the top of my head about the keyboard or joystick.
It probably wouldn't be too bad to read directly from the /dev files if need be.
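For example, a minimal evdev sketch (the node /dev/input/event0 is a placeholder, the right one varies per machine, and reading it usually needs elevated permissions):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    struct input_event ev;
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    while (read(fd, &ev, sizeof(ev)) == (ssize_t)sizeof(ev)) {
        if (ev.type == EV_KEY)   /* key/button events; joysticks also emit EV_ABS */
            printf("key %d %s\n", ev.code, ev.value ? "down" : "up");
    }
    close(fd);
    return 0;
}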
Hope it helps