DirectXTK make_unique<AudioEngine>(flags) fails - audio

I am just about ready to release my first little game built on my game engine. However, while having some people test it, we found that the call that creates the AudioEngine instance fails for one of my testers.
The call works fine for me on both my desktop and my laptop, and it works fine on tester 2's computer. However, tester 1's computer will not succeed on the call.
By "fails" I mean it throws an exception, which I am handling with a try/catch block. The "what" of the exception just tells me "AudioEngine", so no help there. He has a heavily customized computer which utilizes two graphics cards that are not linked and handle separate tasks. He uses the same Virtual Audio Cable setup that I have on my desktop (used to separate voice sources for easier video editing re: streaming/game recording).
If anyone has any clue what might cause this call to fail, we would greatly appreciate the information. Please let me know if you require any additional information. Code for initialization is below:
//audEngine declaration is in the header for the class
unique_ptr<AudioEngine> audEngine;

//function being called
bool AudioEngineClass::InitializeAudioEngine()
{
    //Call this to create the DXTK Audio Engine
    //Setup flags:
    AUDIO_ENGINE_FLAGS eflags = AudioEngine_Default;
    eflags = eflags | AudioEngine_EnvironmentalReverb  //Enables environmental reverb for 3D (required for 3D audio)
                    | AudioEngine_ReverbUseFilters     //Enables additional features for 3D positional audio reverb
                    | AudioEngine_UseMasteringLimiter; //Enables a mastering volume limiter to avoid distortion and clipping with 3D audio.

    //MessageBox(NULL, "Attempting to assign AudioEngine Pointer", "AudioEngine.InitializeAudioEngine", MB_OK);
    try
    {
        audEngine = make_unique<AudioEngine>(eflags);
    }
    catch (const exception& e)
    {
        //Tester 1 falls into this
        string exceptionStr = e.what();
        string outputStr = "Failed to Initialize Audio Engine. Exception: \n";
        outputStr += exceptionStr;
        MessageBox(NULL, outputStr.c_str(), "AudioEngine.InitializeAudioEngine", MB_OK);
    }
    //MessageBox(NULL, "Got past AudioEngine Pointer Assignment", "AudioEngine.InitializeAudioEngine", MB_OK);

    if (!audEngine)
    {
        //failed to create audio engine.
        initialized = false;
    }
    else
    {
        initialized = true;
    }
    return initialized;
}
UPDATE: Been trying stuff all day with no luck so far.
-Had the tester download the June 2010 DirectX DLLs, install them, and restart their computer.
-Had them update all of their drivers.
-Had them check their System folder (all XAudio2_#.dll files are present).
-They are running Windows 10.
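Since the failure seems device-related, one more thing worth checking is whether Windows even reports a default playback endpoint on his machine, since AudioEngine has to pick an output device when it is created. Below is a minimal diagnostic sketch, not DirectXTK code; the CheckDefaultAudioDevice helper name is mine, and it assumes COM is initialized:

#include <windows.h>
#include <mmdeviceapi.h>
#include <wrl/client.h>

// Returns true if Windows reports a default audio render endpoint -
// the same kind of device AudioEngine opens when no device ID is passed.
bool CheckDefaultAudioDevice()
{
    using Microsoft::WRL::ComPtr;

    ComPtr<IMMDeviceEnumerator> enumerator;
    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr,
                                  CLSCTX_ALL, IID_PPV_ARGS(&enumerator));
    if (FAILED(hr))
        return false;

    ComPtr<IMMDevice> device;
    hr = enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);
    return SUCCEEDED(hr);
}

If this returns false on tester 1's machine, the Virtual Audio Cable setup (no default render endpoint exposed) would be the prime suspect rather than DirectXTK itself.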

Related

Can I trigger the Hololens Calibration sequence from inside my application?

I have a HoloLens app I am creating that requires the best accuracy possible for hologram placement. This application will be used by numerous individuals. Whenever I demo the application's progress, I have to have the user go through the calibration process first; otherwise the holograms appear to have far too much drift.
I would like to be able to call the HoloLens calibration process automatically when the application opens. Later, after I set up user authentication and ID management, I will call the calibration process whenever a new user is found.
https://learn.microsoft.com/en-us/windows/mixed-reality/calibration
I have looked into the calibration (via the above documentation and elsewhere) and it seems that all it sets is IPD. However, the alternative solutions I have found that allow for dynamic IPD adjustment appear to be invalid for UWP Store apps. This makes them unusable for me.
I am looking for any help or direction, or if this is even possible. Thank you.
Yes, it is possible to do this; you need to use the LaunchUriAsync protocol to launch the following URI: ms-hololenssetup://EyeTracking
Here is an example implementation, taken from the LaunchUri example in the MRTK:
public void LaunchEyeTracking()
{
#if WINDOWS_UWP
    UnityEngine.WSA.Application.InvokeOnUIThread(async () =>
    {
        bool result = await global::Windows.System.Launcher.LaunchUriAsync(new System.Uri("ms-hololenssetup://EyeTracking"));
        if (!result)
        {
            Debug.LogError("Launching URI failed to launch.");
        }
    }, false);
#else
    Debug.LogError("Launching eye tracking is only supported in Windows UWP builds.");
#endif
}

Clicking Sounds When Playing Clips in Rapid Succession

I have a very simple program that plays 4 different tones, depending on what button is pressed. I have found that if I play multiple tones or the same tone in rapid succession, there are unpleasant clicking noises produced. I have made sure that these clicks are not present in my audio samples; it is definitely caused by playing the clips quickly one after another.
After googling around, I'm fairly sure that the clicks are due to the rapid change in pitch between clips. Looking at the waveform of the playback from the offending audio, it looks like a clip is first cancelled for a fraction of a second before starting the next clip. I have highlighted the section where this seems particularly obvious.
The clip that showcases these audio clicks can also be downloaded here.
My code is very simple. I am using XInput to read input from a connected controller, which determines the tone to play, and I am using WinMM to output sound from wav files. It is written in the D programming language, but I have modified it to use no D-specific features to make it as C-like as possible and to avoid confusion.
SHORT keyPressed(int vkey)
{
    enum highBit { val = 0x8000 }
    return cast(SHORT)(GetKeyState(vkey) & highBit.val);
}

enum Button
{
    DPAD_UP        = 0x0001,
    DPAD_DOWN      = 0x0002,
    DPAD_LEFT      = 0x0004,
    DPAD_RIGHT     = 0x0008,
    START          = 0x0010,
    BACK           = 0x0020,
    LEFT_THUMB     = 0x0040,
    RIGHT_THUMB    = 0x0080,
    LEFT_SHOULDER  = 0x0100,
    RIGHT_SHOULDER = 0x0200,
    A              = 0x1000,
    B              = 0x2000,
    X              = 0x4000,
    Y              = 0x8000,
}

struct XINPUT_GAMEPAD
{
    WORD wButtons;
    BYTE bLeftTrigger;
    BYTE bRightTrigger;
    SHORT sThumbLX;
    SHORT sThumbLY;
    SHORT sThumbRX;
    SHORT sThumbRY;
}

struct XINPUT_STATE
{
    DWORD dwPacketNumber;
    XINPUT_GAMEPAD Gamepad;

    bool isPressed(int button)
    {
        return cast(bool)(Gamepad.wButtons & button);
    }
}

int main()
{
    HANDLE xinputDLL = initXinput();
    XINPUT_STATE oldState;
    XINPUT_STATE newState;

    while (!keyPressed(VK_ESCAPE))
    {
        oldState = newState;
        XInputGetState(0, &newState);

        enum flags { val = SND_ASYNC | SND_FILENAME | SND_NODEFAULT }
        if (newState.isPressed(Button.A) && !oldState.isPressed(Button.A))
        {
            PlaySoundA(toStringz("Piano.ff.A4.wav"), null, flags.val);
        }
        if (newState.isPressed(Button.B) && !oldState.isPressed(Button.B))
        {
            PlaySoundA(toStringz("Piano.ff.B4.wav"), null, flags.val);
        }
        if (newState.isPressed(Button.X) && !oldState.isPressed(Button.X))
        {
            PlaySoundA(toStringz("Piano.ff.C5.wav"), null, flags.val);
        }
        if (newState.isPressed(Button.Y) && !oldState.isPressed(Button.Y))
        {
            PlaySoundA(toStringz("Piano.ff.F4.wav"), null, flags.val);
        }
    }
    denitXinput(xinputDLL);
    return 0;
}
Assuming that I'm correct in regards to the source of the clicking sounds, I think the solution is to have each sample fade into the next one. However, I am not sure how to do this as the WinMM documentation seems relatively sparse, and I am inexperienced with it.
Is the solution to my problem of clicks when playing audio samples to have each sample fade into the next one? If so, how can I accomplish this using WinMM? If not, is there another solution that I can try?
I know how we can solve this in theory, but I don't have actual working code yet for all cases. (When I do, I'll edit this.)
First, the simple case which kinda works: instead of using PlaySound, try mciSendStringA:
if (auto err = mciSendStringA("play test.wav", null, 0, null))
    writeln(err);
I am not making that up, Windows actually has that function, and it actually works with a lot of little command strings and file formats (though if your program terminates, all sound stops, so make sure the program keeps running e.g. stay in your controller loop or call Sleep(something)).
I've used a lot of Win32 and sometimes I'm amazed by how much stuff it has. Prototype:
extern(Windows) uint mciSendStringA(in char*,char*,uint,void*);
found in winmm.lib.
That basically works, but in my test, playing the same file twice at the same time has no effect. Playing different files together mixes them though. So it is a partial solution.
Next step from that would be to use the mciSendCommand function - a bit lower level than send string, so you can open multiple devices and try to get more overlap that way:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd743675%28v=vs.85%29.aspx
I haven't tried this yet, but it looks fairly simple and I suspect it might be good enough for you. Open up a few devices for each button so you can hit them a few times fast and it cycles through them, hopefully mixing the same sound more than once when needed.
The prototype to that is:
extern(Windows) uint /*MCIERROR*/ mciSendCommandA(MCIDEVICEID, UINT, size_t, size_t); // the last two are DWORD_PTR, so size_t keeps it correct on 64-bit too
Yes, it casts to void* and then to DWORD_PTR in the msdn example. Blargh. Relevant structs:
struct MCI_OPEN_PARMSA {
    DWORD dwCallback;
    MCIDEVICEID wDeviceID; // aka uint
    LPCSTR lpstrDeviceType;
    LPCSTR lpstrElementName;
    LPCSTR lpstrAlias;
}

struct MCI_PLAY_PARMS {
    DWORD dwCallback;
    DWORD dwFrom;
    DWORD dwTo;
}
and you can borrow some constants from here too:
https://github.com/AndrejMitrovic/DWinProgramming/blob/master/WindowsAPI/win32/mmsystem.d#L693
(if you are already using the win32 bindings, great! But I think they are kinda a pain for little things so I try to avoid them, preferring to copy/paste prototypes+structs+constants off MSDN as I need them.)
You should be able to get the MSDN example working with those definitions and core.sys.windows.windows. Don't forget pragma(lib, "winmm"); too.
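To sketch what that MSDN example boils down to, here is a rough, untested C++-style version of the open/play sequence (translate the prototypes to D as above; the "waveaudio" device type and the flag names are the real mmsystem.h ones, the file name is just an example):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

// Open a dedicated MCI device for one sound file and start it playing.
// Returns the device ID (0 on failure); close it later with MCI_CLOSE.
MCIDEVICEID playWithMci(const char* filename)
{
    MCI_OPEN_PARMSA open = {};
    open.lpstrDeviceType  = "waveaudio"; // let MCI pick the wave device
    open.lpstrElementName = filename;

    if (mciSendCommandA(0, MCI_OPEN, MCI_OPEN_TYPE | MCI_OPEN_ELEMENT,
                        (DWORD_PTR)&open))
        return 0; // nonzero return is an MCIERROR

    // Without MCI_WAIT this returns immediately, so several opened
    // devices can be playing at the same time.
    MCI_PLAY_PARMS play = {};
    mciSendCommandA(open.wDeviceID, MCI_PLAY, 0, (DWORD_PTR)&play);
    return open.wDeviceID;
}

Opening one device per button press (and cycling through a small pool of them) is how you'd get the same sample to overlap with itself.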
I think a full solution that will certainly work, but is also quite a bit harder, is to use the low-level interface to mix the sounds yourself as they happen and send the result to the device. I don't have this working yet and I'm out of time today, but hopefully I can get something to you tomorrow.
The basic steps are (a rough sketch follows the list):
1) Call waveOutOpen to get a device. Set up a callback function which it calls when it needs more data.
2) Prepare a buffer - or perhaps more than one - with waveOutPrepareHeader.
3) Feed data with waveOutWrite when requested by your callback (you might want this in a separate thread) with the current notes. Mixing two samples is simply a case of adding the values together (and clipping if they overflow - that sounds awful, btw, but hopefully it won't actually happen), so if you are playing more than one sound, just add them as you go.
Don't forget extern(Windows) on any callback function!
4) Loading your samples probably means reading the .wav file. That's not super hard - Windows has helper functions, or you can do it yourself. I'll show code for this too.
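To make steps 1-3 concrete, here is a bare-bones C++-style sketch under some simplifying assumptions: 16-bit mono PCM at 44.1 kHz, a single buffer, and a placeholder mixInto standing in for your own mixer state. It's the shape of the thing, not production code:

#include <windows.h>
#include <mmsystem.h>
#include <cstdint>
#pragma comment(lib, "winmm.lib")

// Placeholder for your mixer: sum the currently-held notes into dst,
// clipping to 16-bit range, e.g.
//   int v = a + b; if (v > 32767) v = 32767; if (v < -32768) v = -32768;
void mixInto(int16_t* dst, size_t count);

int feederLoop()
{
    WAVEFORMATEX wfx = {};
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 1;      // mono keeps the sketch short
    wfx.nSamplesPerSec  = 44100;
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = WORD(wfx.nChannels * wfx.wBitsPerSample / 8);
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    // Step 1: open the default device. CALLBACK_EVENT signals an event
    // whenever a buffer finishes, which sidesteps the restrictions on
    // what you may call from inside a CALLBACK_FUNCTION callback.
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    HWAVEOUT hwo;
    if (waveOutOpen(&hwo, WAVE_MAPPER, &wfx, (DWORD_PTR)done, 0,
                    CALLBACK_EVENT) != MMSYSERR_NOERROR)
        return 1;

    // Step 2: prepare a buffer. Real code wants two or more, rotated,
    // or you get exactly the buffer-gap clicks mentioned below.
    static int16_t samples[4410]; // 100 ms at 44.1 kHz
    WAVEHDR hdr = {};
    hdr.lpData         = (LPSTR)samples;
    hdr.dwBufferLength = sizeof(samples);
    waveOutPrepareHeader(hwo, &hdr, sizeof(hdr));

    // Step 3: fill, write, refill each time the device finishes.
    mixInto(samples, 4410);
    waveOutWrite(hwo, &hdr, sizeof(hdr));
    for (;;)
    {
        WaitForSingleObject(done, INFINITE);
        if (hdr.dwFlags & WHDR_DONE)
        {
            mixInto(samples, 4410); // mix whatever notes are held now
            waveOutWrite(hwo, &hdr, sizeof(hdr));
        }
    }
}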
What I have so far is in my simpleaudio.d https://github.com/adamdruppe/arsd/blob/master/simpleaudio.d find struct AudioOutput and the WinMM version. It has a horrible API right now that must be radically changed - it was acceptable on Linux but sucks on Windows. A callback feeder instead of write(data) should work better on both platforms, so that's what I'll do.
Problem I'm having with the demo right now is gaps between buffers... leading to clicky sounds. Yeah. But I'm sure it is just latency that should be solved with the proper callback approach and buffer sizing.
That MCI function might work for you as a next step though, maybe even a final step if the multiple devices works.
BTW: you could also prolly make it do MIDI commands instead of playing wavs and get all kinds of cool stuff. Simpleaudio.d's low-level midi is already functioning - the demo main even shows a piano scale. Rigging it into the xbox controller shouldn't be too hard... note on when the button is pressed, note off when released, and you don't even have to think about timing. Not really an answer to the question, but a cool thing to play with in the same vein!
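For what it's worth, the Win32 side of that MIDI idea is tiny - a hedged sketch using winmm's built-in MIDI mapper (note 69 is A4; the velocity values are arbitrary examples):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    HMIDIOUT midi;
    if (midiOutOpen(&midi, MIDI_MAPPER, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return 1;

    // Message bytes pack low-to-high: status, note, velocity.
    // Status 0x90 = note on (channel 0), 0x80 = note off.
    midiOutShortMsg(midi, 0x90 | (69 << 8) | (100 << 16)); // A4 on
    Sleep(500);
    midiOutShortMsg(midi, 0x80 | (69 << 8) | (0 << 16));   // A4 off

    midiOutClose(midi);
    return 0;
}

Hook the note-on/note-off pair to the button-down/button-up edges in the controller loop above and you have an instant controller piano.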

IMFMediaPlayer hangs during SetSourceFromByteStream

Background: I'm coding a Metro-style app for Win8. I need to be able to play music files. Because of quality and space requirements we're using encoded audio (mp3/ogg).
I'm using XAudio2 to play sound effects (.wav files), but since I couldn't figure out a way to play encoded audio with it, I decided to play the music files with Media Foundation (the IMFMediaPlayer interface).
I downloaded the Metro app samples and found that the Media Engine Native C++ video playback sample was closest to what I needed.
Now that my app has MediaPlayer playing music, I ran into a problem. If the device running the app is slow enough, MediaPlayer hangs. When I'm running the release version of the app on my device, it's fine and I can hear the music just fine. But when I attach the debugger or run it on a slower device, it hangs when I'm setting the byte stream for the MediaPlayer to play.
Here's some code; you'll find it pretty similar to the sample:
StorageFolder^ installedLocation = Windows::ApplicationModel::Package::Current->InstalledLocation;
m_pickFileTask = Concurrency::task<StorageFile^>(installedLocation->GetFileAsync(filename), m_tcs.get_token());
auto player = this;
m_pickFileTask.then([player](StorageFile^ fileHandle)
{
    player->SetURL(fileHandle->Path);
    Concurrency::task<IRandomAccessStream^> fOpenStreamTask = Concurrency::task<IRandomAccessStream^>(fileHandle->OpenAsync(Windows::Storage::FileAccessMode::Read));
    fOpenStreamTask.then([player](IRandomAccessStream^ streamHandle)
    {
        MEDIA::ThrowIfFailed(
            player->m_spMediaEngine->Pause()
        );
        MEDIA::GetMediaError(player->m_spMediaEngine);
        player->SetBytestream(streamHandle);
        if (player->m_spMediaEngine)
        {
            MEDIA::ThrowIfFailed(
                player->m_spEngineEx->Play()
            );
            MEDIA::GetMediaError(player->m_spMediaEngine);
        }
    });
});
And here's the SetBytestream method:
void SetBytestream(IRandomAccessStream^ streamHandle)
{
    if (m_spMFByteStream != nullptr)
    {
        m_spMFByteStream->Close();
        m_spMFByteStream = nullptr;
    }
    MEDIA::ThrowIfFailed(
        MFCreateMFByteStreamOnStreamEx((IUnknown*)streamHandle, &m_spMFByteStream)
    );
    MEDIA::ThrowIfFailed(
        m_spEngineEx->SetSourceFromByteStream(m_spMFByteStream.Get(), m_bstrURL)
    );
    MEDIA::GetMediaError(m_spEngineEx);
    return;
}
The line where it hangs is:
m_spEngineEx->SetSourceFromByteStream(m_spMFByteStream.Get(), m_bstrURL)
When I'm debugging the app, I can press pause and see the stack. Well, not much of it, but at least I can see that it's sitting indefinitely at
ntdll.dll!77b7f4dc()
Any ideas why my app would hang in such a way?
(OPTIONAL: If you know a better way to play mp3/ogg in a c++ metro-styled app, let me know)
I could not figure out why this is happening, but I managed to code a workaround:
IMFSourceReader can be used to decode MP3s and feed the bytes into an XAudio2 source voice.
The XAudio2 audio stream effect sample contains a good example of how to do this.
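For reference, here is a rough sketch of the shape of that workaround - not the sample's actual code. It decodes a whole MP3 to PCM with IMFSourceReader; a real player would instead stream chunks and feed them to an IXAudio2SourceVoice from its buffer-end callback, and "music.mp3" is a placeholder:

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <wrl/client.h>
#include <vector>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

using Microsoft::WRL::ComPtr;

// Decode an MP3 into PCM bytes. The caller then creates an
// IXAudio2SourceVoice matching the decoded format and submits
// chunks of `pcm` via SubmitSourceBuffer.
std::vector<BYTE> DecodeMp3ToPcm(const wchar_t* url)
{
    MFStartup(MF_VERSION);

    ComPtr<IMFSourceReader> reader;
    MFCreateSourceReaderFromURL(url, nullptr, &reader);

    // Ask the reader to output uncompressed PCM.
    ComPtr<IMFMediaType> pcmType;
    MFCreateMediaType(&pcmType);
    pcmType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    pcmType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM);
    reader->SetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_AUDIO_STREAM,
                                nullptr, pcmType.Get());

    std::vector<BYTE> pcm;
    for (;;)
    {
        DWORD flags = 0;
        ComPtr<IMFSample> sample;
        reader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_AUDIO_STREAM, 0,
                           nullptr, &flags, nullptr, &sample);
        if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
            break;
        if (!sample)
            continue; // stream tick or gap, nothing to copy

        ComPtr<IMFMediaBuffer> buffer;
        sample->ConvertToContiguousBuffer(&buffer);
        BYTE* data = nullptr;
        DWORD length = 0;
        buffer->Lock(&data, nullptr, &length);
        pcm.insert(pcm.end(), data, data + length);
        buffer->Unlock();
    }
    return pcm;
}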

Is Adobe Media Encoder scriptable with ExtendScript?

Is Adobe Media Encoder (AME) Scriptable? I've heard people mention it was "officially scriptable" but I can't find any reference to its scriptable object set.
Has anyone had any experience scripting AME?
Adobe Media Encoder is 'officially' not scriptable, but we can use the ExtendScript API to script AME.
The functions below are available through ExtendScript.
1. Adding a file to the batch
Encode progress:
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
flag = enc.loadPreset("HD 1080i 29.97, H.264, AAC 48 kHz");
if (flag) {
    enc.onEncodeProgress = function (progress) {
        $.writeln(progress);
    };
    enc.encode("/Users/test/Desktop/00000.MTS", "/Users/test/Desktop/0.mov");
} else {
    alert("The preset could not be loaded");
}
Encode finished:
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
flag = enc.loadPreset("HD 1080i 29.97, H.264, AAC 48 kHz");
if (flag) {
    enc.onEncodeFinished = function (success) {
        if (success) {
            alert("Encoding finished successfully");
        } else {
            alert("Encoding failed");
        }
    };
    eHost.runBatch();
} else {
    alert("The preset could not be loaded");
}
2. Start batch
eHost = app.getEncoderHost();
eHost.runBatch();
3. Stop batch
eHost = app.getEncoderHost();
eHost.stopBatch();
4. Pause batch
eHost = app.getEncoderHost();
eHost.pauseBatch();
5. Getting preset formats
eHost = app.getEncoderHost();
list = eHost.getFormatList();
6. Getting presets
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
list = enc.getPresetList();
and many more...
The closest bit of info I've found is:
http://www.openspc2.org/book/MediaEncoderCC/
That resource is actually good if you can read Japanese, or at least use Chrome's built-in translate function; you can see it has pages such as this one:
http://www.openspc2.org/book/MediaEncoderCC/easy/encodeHost/009/index.html
Almost all of the basic functionality can be driven through script.
I had a similar question about Soundbooth. I haven't tried scripting Adobe Media Encoder, though; it doesn't show up in the list of applications I could potentially connect to and script with the ExtendScript Toolkit.
I did find this article that might come in handy if you're on Windows. I guess something similar written in AppleScript could do the job on OS X. I haven't tried it, but this Sikuli thing looks nice; maybe it could help with the job.
Adobe Media Encoder doesn't seem to be scriptable. I was wondering: for batch converting, could you use ffmpeg? There seem to be a few scripts out there for that, if you google for ffmpeg batch flv.
HTH,
George
Year 2021
Yes, AME is scriptable in ExtendScript. The AME API doc can be found at
https://ame-scripting.docsforadobe.dev/index.html.
The API methods can be invoked locally inside AME or remotely through BridgeTalk.
addCompToBatch and other alternatives in the API doc seem to be safe to use. This is working:
app.getFrontend().addCompToBatch(project, preset, destination);
The method requires the project to be structured so that one and only one comp is at the root of the project.
encoder.encode (references to which can be found around the web, and which supposedly supports encode-progress callbacks) is not available in AME 2020 and 2021. As a result, this does not work:
var encoder = app.getEncoderHost().createEncoderForFormat(encoderFormat);
var res = encoder.loadPreset(encoderPreset);
if (res) {
    encoder.encode(project, destination); // error: encode is not a function
}
The method seems to have been removed in AME 2017.1, according to the post reporting the issue https://community.adobe.com/t5/adobe-media-encoder-discussions/media-encoder-automation-system-with-using-extendscript/td-p/9344018
The official stance at the moment is "no", but if you open the Adobe Extend Script Toolkit, and set the target app to Media Encoder, you will see in the Data Browser that a few objects and methods are already exposed in the app object, like app.getFrontend(), app.getEncoderHost() etc. There is no official documentation though, and no support, so you are free to experiment with them at your own risk.
You can use the ExtendScript reflection interface like this:
a = app.getFrontend()
a.reflect.properties
a.reflect.methods
a.reflect.find("addItemToBatch").description
But as far as I can see, no meaningful information can be found this way beyond a list of methods and properties.
More about the ExtendScript reflect interface can be found in the JavaScript Tools Guide CC document.
It doesn't seem to be. There are some references to it being somewhat scriptable via FCP XML, yet it's not "scriptable" in the accepted sense of the word.
Edit, it looks like they finally got their finger out and made ME scriptable: https://stackoverflow.com/a/69203537/432987
I got here after it came second in the DuckDuckGo results for "extendscript adobe media encoder". First was a post on the Adobe forums where an Adobe staffer wrote:
Scripting in Adobe Media Encoder is not a supported feature.
and, just to give the finger to anyone seeking to develop solutions for adobe users using adobe's platform:
Also, this is a user-to-user forum, not an official channel for support from Adobe personnel.
I think the answer is "Adobe says no"

Blackberry Audio Recording Sample Code

Does anyone know of a good repository to get sample code for the BlackBerry? Specifically, samples that will help me learn the mechanics of recording audio, possibly even sampling it and doing some on the fly signal processing on it?
I'd like to read incoming audio, sample by sample if need be, then process it to produce a desired result, in this case a visualizer.
The RIM API contains JSR 135 (Java Mobile Media API) for handling audio and video content.
You're correct about the mess in the BB Knowledge Base. The only way is to browse it, hoping they're not going to change the site map again.
It's Developers -> Resources -> Knowledge Base -> Java APIs & Samples -> Audio & Video
Audio Recording
Basically it's simple to record audio:
create Player with correct audio encoding
get RecordControl
start recording
stop recording
Links:
RIM 4.6.0 API ref: Package javax.microedition.media
How To - Record Audio on a BlackBerry smartphone
How To - Play audio in an application
How To - Support streaming audio to the media application
How To - Specify Audio Path Routing
How To - Obtain the media playback time from a media application
What Is - Supported audio formats
What Is - Media application error codes
Audio Record Sample
A Thread holding the Player, RecordControl and resources is declared:
final class VoiceNotesRecorderThread extends Thread {
    private Player _player;
    private RecordControl _rcontrol;
    private ByteArrayOutputStream _output;
    private byte _data[];

    VoiceNotesRecorderThread() {}

    private int getSize() {
        return (_output != null ? _output.size() : 0);
    }

    private byte[] getVoiceNote() {
        return _data;
    }
}
In Thread.run(), audio recording is started:
public void run() {
    try {
        // Create a Player that captures live audio.
        _player = Manager.createPlayer("capture://audio");
        _player.realize();

        // Get the RecordControl and set the record stream.
        _rcontrol = (RecordControl)_player.getControl("RecordControl");

        // Create a ByteArrayOutputStream to capture the audio stream.
        _output = new ByteArrayOutputStream();
        _rcontrol.setRecordStream(_output);
        _rcontrol.startRecord();
        _player.start();
    } catch (final Exception e) {
        UiApplication.getUiApplication().invokeAndWait(new Runnable() {
            public void run() {
                Dialog.inform(e.toString());
            }
        });
    }
}
And in the thread's stop(), recording is stopped:
public void stop() {
    try {
        // Stop recording, capture data from the OutputStream,
        // close the OutputStream and the player.
        _rcontrol.commit();
        _data = _output.toByteArray();
        _output.close();
        _player.close();
    } catch (Exception e) {
        synchronized (UiApplication.getEventLock()) {
            Dialog.inform(e.toString());
        }
    }
}
Processing and sampling audio stream
At the end of recording you will have an output stream filled with data in a specific audio format, so to process or sample it you will have to decode that audio stream.
On-the-fly processing will be more complex: you will have to read the output stream during recording, without committing the record. That leaves several problems to solve:
synchronized access to the output stream for the recorder and the sampler - a threading issue
reading the correct amount of audio data - dig into the audio format's framing to find out the markup rules
Also may be useful:
java.net: Experiments in Streaming Content in Java ME by Vikram Goyal
While not audio specific, this question does have some good "getting started" references.
Writing Blackberry Applications
I spent ages trying to figure this out too. Once you've installed the BlackBerry Component Packs (available from their website), you can find the sample code inside the component pack.
In my case, once I had installed the Component Packs into Eclipse, I found the extracted sample code in this location:
C:\Program Files\Eclipse\eclipse3.4\plugins\net.rim.eide.componentpack4.5.0_4.5.0.16\components\samples
Unfortunately, when I imported all that sample code I had a bunch of compile errors. To work around that, I just deleted the 20% of packages with compile errors.
My next problem was that launching the Simulator always launched the first sample code package (in my case activetextfieldsdemo); I couldn't get it to run just the package I was interested in. The workaround for that was to delete all the packages listed alphabetically before the one I wanted.
Other gotchas:
-Right click on the project in Eclipse and select Activate for BlackBerry
-Choose BlackBerry -> Build Configurations... -> Edit... and select your new project so it builds.
-Make sure you put your BlackBerry source code under a "src" folder in the Eclipse project, otherwise you might hit build issues.
