Speaking with an ISpVoice from an ISpTTSEngine - visual-c++

I'm implementing an ISpTTSEngine for the Microsoft Speech API (SAPI). I'd like
this voice to enunciate just like a typical TTS voice. Rather than write my
own speech synthesizer, I'd like to delegate to a built-in ISpVoice.
I've written enough code to hear text vocalized, but it has a major deficiency
that I haven't been able to explain: the speech does not begin until after my
implementation of ISpTTSEngine::Speak has returned. For the duration of the
audible output, my implementation of ISpTTSEngine::Speak is not invoked, even
when the software using the TTS voice is sending requests.
(For context: my goal for this project is to programmatically observe the speech data that other pieces
of software are attempting to vocalize. That part appears to be working as
intended.)
The full source is available
here. I'll try to summarize the most relevant parts.
My implementation of ISpTTSEngine has a private member named
m_cpVoice:
class ATL_NO_VTABLE CTTSEngObj :
    public CComObjectRootEx<CComMultiThreadModel>,
    public CComCoClass<CTTSEngObj, &CLSID_SampleTTSEngine>,
    public ISpTTSEngine,
    public ISpObjectWithToken
{
    // ...
private:
    CComPtr<ISpVoice> m_cpVoice;
And it is initialized in the FinalConstruct
method:
HRESULT CTTSEngObj::FinalConstruct()
{
    HRESULT hr = S_OK;
    // ...
    hr = m_cpVoice.CoCreateInstance(CLSID_SpVoice);
My implementation of ISpTTSEngine::Speak iterates over the text fragments it
receives
and passes the text data to the ISpVoice::Speak
method:
STDMETHODIMP CTTSEngObj::Speak(DWORD dwSpeakFlags,
                               REFGUID rguidFormatId,
                               const WAVEFORMATEX* pWaveFormatEx,
                               const SPVTEXTFRAG* pTextFragList,
                               ISpTTSEngineSite* pOutputSite)
{
    // ...
    for (const SPVTEXTFRAG* textFrag = pTextFragList; textFrag != NULL; textFrag = textFrag->pNext)
    {
        // ...
        const std::wstring& text = textFrag->pTextStart;
        hr = m_cpVoice->Speak(text.substr(0, textFrag->ulTextLen).c_str(),
                              dwSpeakFlags | SPF_ASYNC | SPF_PURGEBEFORESPEAK, 0);
As mentioned above, no audio is emitted until after ISpTTSEngine::Speak
returns. An arbitrary sleep statement demonstrates this most clearly. Polling
the ISpVoice's SpeakCompleteEvent handle inevitably times out. Removing the
SPF_ASYNC flag from the invocation of ISpVoice::Speak causes the caller to
crash.
Can anyone explain this behavior? Or suggest a change that would allow me to
observe subsequent speech requests?

SAPI isn't expecting to be entered recursively. Consider using a different TTS engine (e.g., the WinRT Windows.Media.SpeechSynthesis APIs) to do the actual synthesis. The text fragments won't have any embedded markup, so that won't be a big deal.
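For illustration, a minimal C++/WinRT sketch of that approach (the helper name is hypothetical; a real engine would skip the RIFF header of the synthesized stream, use the format negotiated in GetOutputFormat, and honor the actions reported by the site):

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.SpeechSynthesis.h>
#include <winrt/Windows.Storage.Streams.h>
#include <sapi.h>
#include <sapiddk.h>
#include <string>
#include <vector>

// Hypothetical helper, called from ISpTTSEngine::Speak for each text fragment:
// synthesize with WinRT, then push the audio bytes to the SAPI output site.
// Assumes the calling thread is MTA; blocking .get() is not allowed on STA threads.
HRESULT SynthesizeWithWinRT(const std::wstring& text, ISpTTSEngineSite* pOutputSite)
{
    using namespace winrt::Windows::Media::SpeechSynthesis;
    using namespace winrt::Windows::Storage::Streams;

    SpeechSynthesizer synth;

    // Blocking here is fine: SAPI expects ISpTTSEngine::Speak to do its
    // work synchronously, unlike ISpVoice::Speak.
    SpeechSynthesisStream stream = synth.SynthesizeTextToStreamAsync(text.c_str()).get();

    // Read the whole synthesized stream (a WAV file, RIFF header included).
    DataReader reader(stream.GetInputStreamAt(0));
    const uint32_t size = static_cast<uint32_t>(stream.Size());
    reader.LoadAsync(size).get();
    std::vector<uint8_t> audio(size);
    reader.ReadBytes(audio);

    // A real engine would parse the RIFF header and write only the PCM payload.
    ULONG written = 0;
    return pOutputSite->Write(audio.data(), static_cast<ULONG>(audio.size()), &written);
}

Because the synthesis happens inline, the call completes before Speak returns, which sidesteps the re-entrancy issue described in the question.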

Related

Sink component doesn't get the right data with Kafka in Spring Cloud Data Flow

I am not a native English speaker, but I will try to express my question as clearly as possible.
I have encountered a problem that has confused me for two days, and I still can't find the solution.
I have built a stream that runs in Spring Cloud Data Flow on Hadoop YARN.
The stream is composed of an HTTP source, a processor, and a file sink.
1. HTTP Source
The HTTP source component has two output channels binding to two different destinations, dest1 and dest2, which are defined in application.properties:
spring.cloud.stream.bindings.output.destination=dest1
spring.cloud.stream.bindings.output2.destination=dest2
Below is the code snippet for the HTTP source for your reference.
@Autowired
private EssSource channels; // EssSource is the interface for multiple output channels

// output channel 1:
@RequestMapping(path = "/file", method = POST, consumes = {"text/*", "application/json"})
@ResponseStatus(HttpStatus.ACCEPTED)
public void handleRequest(@RequestBody byte[] body, @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {
    logger.info("enter ... handleRequest1...");
    channels.output().send(MessageBuilder.createMessage(body,
            new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));
}

// output channel 2:
@RequestMapping(path = "/test", method = POST, consumes = {"text/*", "application/json"})
@ResponseStatus(HttpStatus.ACCEPTED)
public void handleRequest2(@RequestBody byte[] body, @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {
    logger.info("enter ... handleRequest2...");
    channels.output2().send(MessageBuilder.createMessage(body,
            new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));
}
2. Processor
The processor has two input channels and two output channels, binding to different destinations.
The destination bindings are defined in application.properties in the processor component project.
# input channel bindings
spring.cloud.stream.bindings.input.destination=dest1
spring.cloud.stream.bindings.input2.destination=dest2
# output channel bindings
spring.cloud.stream.bindings.output.destination=hdfsSink
spring.cloud.stream.bindings.output2.destination=fileSink
Below is the code snippet for the processor.
@Transformer(inputChannel = EssProcessor.INPUT, outputChannel = EssProcessor.OUTPUT)
public Object transform(Message<?> message) {
    logger.info("enter ... transform...");
    return "processed by transform1";
}

@Transformer(inputChannel = EssProcessor.INPUT_2, outputChannel = EssProcessor.OUTPUT_2)
public Object transform2(Message<?> message) {
    logger.info("enter ... transform2...");
    return "processed by transform2";
}
3. The file sink component
I use the official file sink component from Spring:
maven://org.springframework.cloud.stream.app:file-sink-kafka:1.0.0.BUILD-SNAPSHOT
And I just add the destination binding in its application.properties file:
spring.cloud.stream.bindings.input.destination=fileSink
4. Finding:
The data flow I expected looks like this:
Source.handleRequest() --> Processor.transform()
Source.handleRequest2() --> Processor.transform2() --> Sink.fileWritingMessageHandler();
so that only the string "processed by transform2" is saved to the file.
But after my testing, the data flow actually looks like this:
Source.handleRequest() --> Processor.transform() --> Sink.fileWritingMessageHandler();
Source.handleRequest2() --> Processor.transform2() --> Sink.fileWritingMessageHandler();
Both the "processed by transform1" and "processed by transform2" strings are saved to the file.
5. Question:
Although the destination for the output channel in Processor.transform() binds to hdfsSink instead of fileSink, the data still flows to the file sink. I can't understand this, and it is not what I want.
I want only the data from Processor.transform2() to flow to the file sink, not both.
If I am not doing this right, could anyone tell me how to do it and what the solution is?
This has confused me for two days.
Thank you for your kind help.
Alex
Is your stream definition something like this (where the '-2' versions are the ones with multiple channels)?
http-source-2 | processor-2 | file-sink
Note that Spring Cloud Data Flow will override the destinations defined in application.properties, which is why, even if spring.cloud.stream.bindings.output.destination for the processor is set to hdfsSink, it will actually match the input of file-sink.
The way destinations are configured from a stream definition is explained here (in the context of taps): http://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#spring-cloud-dataflow-stream-tap-dsl
What you can do is simply swap the meaning of channels 1 and 2: use the side channel for HDFS. This is a bit brittle, though, as the input/output channels of the stream will be configured automatically while the other channels are configured via application.properties. In this case it may be better to configure the side-channel destinations via the stream definition or at deployment time - see http://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_application_properties.
It seems to me that these could just as well be two streams listening on separate endpoints, using regular components, given that the data is supposed to flow side by side.
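For illustration only (the stream name is made up, the shell syntax follows the SCDF 1.x reference, and the property key mirrors the bindings used in the question), a deployment-time override would let Data Flow auto-wire the primary channels while the side channel is pinned explicitly:

dataflow:> stream create ess --definition "http-source-2 | processor-2 | file-sink"
dataflow:> stream deploy ess --properties "app.processor-2.spring.cloud.stream.bindings.output2.destination=hdfsSink"

With the channels swapped as suggested above, the primary output then reaches file-sink through the stream definition, and output2 goes to hdfsSink regardless of the overrides Data Flow applies.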

Rendering video in Direct3D obtained from Media Foundation efficiently

I want to efficiently use live video that I am decoding from Media Foundation.
Originally, I was running the render functions synchronously after decoding each frame. The incoming frame rate is around 25-30 fps, but I would like to render the graphics (game) content at 60 fps.
If I do it asynchronously, I either get corrupted output, black screens, or both, or a very low frame rate due to aggressive locking. Since the GPU operations are asynchronous, I haven't been able to find a reasonable critical section. How is this normally done? I can use one of my temporary surfaces (source, dest, or g_pDecodedTexture) as a synchronization point and surround writes to it with a CRITICAL_SECTION, but I don't know where the critical section should go on the render (reading) thread. If I surround the whole render function, my frame rate is very low, and if I don't, I get incorrect output. Maybe there is another, more appropriate method of synchronization.
At render setup time
hr = g_d3dDevice->CreateShaderResourceView(g_pDecodedTexture, &shaderResourceViewDesc, &g_pTextureRV);
In the decode thread
void Decode()
{
    MFT_OUTPUT_DATA_BUFFER output = { 0 };
    //...
    encoder->ProcessOutput(0, 1, &output, &status);
    //
    CComPtr<IMFMediaBuffer> spMediaBuffer;
    CComPtr<IMFDXGIBuffer> spDXGIBuffer;
    CComPtr<IDXGIResource> spDecodedTexture;
    output.pSample->GetBufferByIndex(0, &spMediaBuffer);
    spMediaBuffer->QueryInterface(IID_PPV_ARGS(&spDXGIBuffer));
    spDXGIBuffer->GetResource(IID_PPV_ARGS(&spDecodedTexture));
    //....
    CComPtr<ID3D11Texture2D> source;
    spDXGIBuffer->QueryInterface<ID3D11Texture2D>(&source);
    //
    CComPtr<ID3D11Resource> dest;
    swapChain->GetBuffer(0, __uuidof(ID3D11Resource), (void**)&dest);
    deviceContext->CopyResource(dest, source);
    deviceContext->CopyResource(g_pDecodedTexture, source);
}
In the render thread
void Render()
{
    //...
    deviceContext->PSSetShaderResources(0, 1, &g_pTextureRV);
    //..
    m_deviceContext->VSSetShaderResources(0, 1, &g_pTextureRV);
    //..
    immediateContext->DrawIndexed(...);
    //..
    immediateContext->DrawIndexed(...);
    //..
    immediateContext->DrawIndexed(...);
    //..
    immediateContext->DrawIndexed(...);
    //
    Present();
}
You can try this: insert the Frame Rate Converter DSP after the decoder. Be sure your input format is compatible with the DSP, and set the frame rate to 60 fps.
Doing this, I think you can keep the synchronous approach.
If you want to manually display at 60 fps, we need more code to see where the problem comes from.
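Not from the answer above, but for illustration: a common way to hand frames between the two threads without locking the whole render function is a shared texture guarded by a DXGI keyed mutex. The sketch below assumes the decoder runs on its own D3D11 device (as it does when Media Foundation is given a DXGI Device Manager), that the shared texture was created with D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX and opened on both devices, and that all names are illustrative.

#include <d3d11.h>
#include <dxgi.h>
#include <atlbase.h>

// Decode thread: wait for key 0, copy the new frame in, hand it off as key 1.
void PublishFrame(ID3D11DeviceContext* decodeContext,
                  ID3D11Texture2D* source,
                  ID3D11Texture2D* sharedOnDecodeDevice)
{
    CComPtr<IDXGIKeyedMutex> mutex;
    sharedOnDecodeDevice->QueryInterface(IID_PPV_ARGS(&mutex));
    if (mutex->AcquireSync(0, INFINITE) == S_OK) {
        decodeContext->CopyResource(sharedOnDecodeDevice, source);
        mutex->ReleaseSync(1);
    }
}

// Render thread: try to take key 1 without waiting; AcquireSync returns
// WAIT_TIMEOUT when no new frame is ready, in which case the previous
// contents of the SRV's texture are simply drawn again, so a 25-30 fps
// feed never stalls a 60 fps render loop.
void AcquireLatestFrame(ID3D11DeviceContext* renderContext,
                        ID3D11Texture2D* sharedOnRenderDevice,
                        ID3D11Texture2D* decodedTexture /* texture behind g_pTextureRV */)
{
    CComPtr<IDXGIKeyedMutex> mutex;
    sharedOnRenderDevice->QueryInterface(IID_PPV_ARGS(&mutex));
    if (mutex->AcquireSync(1, 0) == S_OK) {
        renderContext->CopyResource(decodedTexture, sharedOnRenderDevice);
        mutex->ReleaseSync(0);
    }
}

The GPU serializes access to the shared texture, so no CPU-side critical section is needed around the copies, and the shader resource view keeps pointing at a texture that only the render thread writes.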

Can I use HaxeUI with HaxeFlixel?

I tried to use both HaxeUI and HaxeFlixel, but what I get is HaxeUI's interface over a white background that covers everything underneath. Moreover, even if it were possible to somehow make HaxeUI and HaxeFlixel work together, it's not clear how to change the HaxeUI interface when the state changes in HaxeFlixel. Here is the code I used:
private function setupGame():Void {
    Toolkit.theme = new GradientTheme();
    Toolkit.init();

    var stageWidth:Int = Lib.current.stage.stageWidth;
    var stageHeight:Int = Lib.current.stage.stageHeight;

    if (zoom == -1) {
        var ratioX:Float = stageWidth / gameWidth;
        var ratioY:Float = stageHeight / gameHeight;
        zoom = Math.min(ratioX, ratioY);
        gameWidth = Math.ceil(stageWidth / zoom);
        gameHeight = Math.ceil(stageHeight / zoom);
    }

    trace('stage: ${stageWidth}x${stageHeight}, game: ${gameWidth}x${gameHeight}, zoom=$zoom');
    addChild(new FlxGame(gameWidth, gameHeight, initialState, zoom, framerate, framerate, skipSplash, startFullscreen));

    Toolkit.openFullscreen(function(root:Root) {
        var view:IDisplayObject = Toolkit.processXmlResource("assets/xml/haxeui-resource.xml");
        root.addChild(view);
    });
}
I can guess that, probably, both HaxeUI and HaxeFlixel have their own main loop and that their event handling might not be compatible, but just in case: can someone give a more definitive answer?
Edit:
Actually, it's much better when using openPopup:
Toolkit.openPopup({ x:20, y:150, width:100, height:100 }, function(root:Root) {
    var view:IDisplayObject = Toolkit.processXmlResource("assets/xml/haxeui-naming.xml");
    root.addChild(view);
});
It's possible to interact with the rest of the screen (managed by HaxeFlixel), but the mouse pointer in the part of the screen managed by HaxeFlixel remains under the HaxeUI user interface elements.
When using Flixel and HaxeUI together, it's almost like running two applications at once. However, they both rely on OpenFL as a back-end, and each attaches itself to its display tree.
One technique I'm experimenting with right now is to open a Flixel sub state, and within the sub state, call Toolkit.openFullscreen(). From inside of this, you can set the alpha of the root's background to 0, which allows you to see through it onto the underlying bitmap that Flixel uses to render.
Here is a minimal example of how you might "embed" an editor interface inside a Flixel sub state:
import haxe.ui.toolkit.core.Toolkit;
import haxe.ui.toolkit.core.RootManager;
import haxe.ui.toolkit.themes.DefaultTheme;
import flixel.FlxG;
import flixel.FlxSubState;

// This would typically be a Haxe UI XMLController
import app.MainEditor;

class HaxeUIState extends FlxSubState
{
    override public function create()
    {
        super.create();

        // Flixel uses a sprite-based cursor by default,
        // so you need to enable the system cursor to be
        // able to see what you're clicking.
        FlxG.mouse.useSystemCursor = true;

        Toolkit.theme = new DefaultTheme();
        Toolkit.init();
        Toolkit.openFullscreen(function (root) {
            var editor = new MainEditor();

            // Allows you to see what's going on in the sub state
            root.style.backgroundAlpha = 0;
            root.addChild(editor.view);
        });
    }

    override public function destroy()
    {
        super.destroy();

        // Switch back to Flixel's sprite-based cursor
        FlxG.mouse.useSystemCursor = false;

        // Not sure if this is the "correct" way to close the UI,
        // but it works for my purposes. Alternatively you could
        // try opening the editor in advance, but hiding it
        // until the sub-state opens.
        RootManager.instance.destroyAllRoots();
    }

    // As far as I can tell, the update function continues to get
    // called even while Haxe UI is open.
    override public function update() {
        super.update();

        if (FlxG.keys.justPressed.ESCAPE) {
            // This will implicitly trigger destroy().
            close();
        }
    }
}
In this way, you can associate different Flixel states with different Haxe UI controllers. (NOTE: They don't strictly have to be sub-states, that's just what worked best in my case.)
When you open a fullscreen or popup with HaxeUI, the program flow will be blocked (your update() and draw() functions won't be called). You should probably have a look at flixel-ui instead.
From my experience, HaxeFlixel and HaxeUI work well together, but they are totally independent projects; as such, any coordination between Flixel states and the displayed UI must be added by the coder.
I don't recall having the white background problem you mention; it shouldn't happen unless the HaxeUI root sprite has a solid background, in which case it should be raised with the HaxeUI project maintainer.

How to implement output cache for a content part (such as a widget)?

I have a widget with a list of the latest news; how can I cache only the widget's output?
The OutputCache module caches the whole page, and only for anonymous users, but I need to cache just one shape's output.
What would be a good solution here?
It's not a good idea to cache the Shape object itself, but you can capture the HTML output from a Shape and cache that.
Every Orchard Shape has a corresponding object called the Metadata. This object contains, among other things, some event handlers that can run when the Shape is displaying or after it has been displayed. By using these event handlers, it is possible to cache the output of the Shape on the first call to a driver. Then for future calls to the driver, we can display the cached copy of the output instead of running through the expensive parts of the driver or template rendering.
Example:
using System.Web;
using DemoModule.Models;
using Orchard.Caching;
using Orchard.ContentManagement.Drivers;
using Orchard.DisplayManagement.Shapes;

namespace DemoModule.Drivers {
    public class MyWidgetPartDriver : ContentPartDriver<MyWidgetPart> {
        private readonly ICacheManager _cacheManager;
        private readonly ISignals _signals;

        public MyWidgetPartDriver(
            ICacheManager cacheManager,
            ISignals signals
        ) {
            _cacheManager = cacheManager;
            _signals = signals;
        }

        public class CachedOutput {
            public IHtmlString Output { get; set; }
        }

        protected override DriverResult Display(MyWidgetPart part, string displayType, dynamic shapeHelper) {
            return ContentShape("Parts_MyWidget", () => {
                // The cache key. Build it using whatever is needed to differentiate the output.
                var cacheKey = /* e.g. */ string.Format("MyWidget-{0}", part.Id);

                // Standard Orchard cache manager. Notice we get this object by reference,
                // so we can write to its field to save our cached HTML output.
                var cachedOutput = _cacheManager.Get(cacheKey, ctx => {
                    // Use whatever signals are needed to invalidate the cache.
                    _signals.When(/* e.g. */ "ExpireCache");
                    return new CachedOutput();
                });

                dynamic shape;
                if (cachedOutput.Output == null) {
                    // Output has not yet been cached, so we are going to build the shape normally
                    // and then cache the output.
                    /*
                        ... Do normal (potentially expensive) things (call DBs, call services, etc.)
                        to prep shape ...
                    */

                    // Create shape object.
                    shape = shapeHelper.Parts_MyWidget(/*...*/);

                    // Hook up an event handler such that after rendering the (potentially expensive)
                    // shape template, we capture the output to the cached output object.
                    ((ShapeMetadata)shape.Metadata).OnDisplayed(displayed => cachedOutput.Output = displayed.ChildContent);
                } else {
                    // Found cached output, so simply output it instead of building
                    // the shape normally.

                    // This is a dummy shape; the name doesn't matter.
                    shape = shapeHelper.CachedShape();

                    // Hook up an event handler to fill the output of this shape with the cached output.
                    ((ShapeMetadata)shape.Metadata).OnDisplaying(displaying => displaying.ChildContent = cachedOutput.Output);

                    // Replacing the ChildContent of the displaying context causes the display manager
                    // to simply use that HTML output and skip template rendering.
                }
                return shape;
            });
        }
    }
}
EDIT:
Note that this only caches the HTML generated from your shape output. Things like Script.Require(), Capture(), and other side effects performed in your shape templates will not be played back. This actually bit me: I tried to cache a template that required its own stylesheet, but the stylesheet would only be brought in the first time.
Orchard supplies a service called the CacheManager, which is awesome and cool and makes caching super easy. It is mentioned in the docs, but the description of how to use it isn't particularly helpful (http://docs.orchardproject.net/Documentation/Caching). The best places to see examples are the Orchard core code and third-party modules such as Favicon and the Twitter widgets (all of them, one would hope).
Luckily, other nice people have gone to the effort of searching Orchard's code for you and writing nice little blog posts about it. The developer of the LatestTwitter widget wrote a neat post: http://blog.maartenballiauw.be/post/2011/01/21/Writing-an-Orchard-widget-LatestTwitter.aspx . So did Richard of NogginBox: http://www.nogginbox.co.uk/blog/orchard-caching-by-time . And of course Bertrand has a helpful post on the subject as well: http://weblogs.asp.net/bleroy/archive/2011/02/16/caching-items-in-orchard.aspx

Making a DLL with C++

I've been trying to start writing a plug-in for a program called "Euroscope" for quite some time, and I still can't do anything. I even read a C++ book and got nothing out of it; it's too difficult to start.
The question I'm going to ask is a little bit specific and difficult to explain, but I'm tired of trying to solve this on my own, so here it comes.
I have an imported class with a bunch of function prototypes in a header called "EuroScopePlugIn".
My main .cpp is this:
void CPythonPlugInScreen::meu()
{
    // Loop over the planes
    EuroScopePlugIn::CAircraft ac;
    EuroScopePlugIn::CAircraftFlightPlan acfp;
    CString str;
    CPythonPlugIn object;
    for (ac = GetPlugIn()->AircraftSelectFirst();
         ac.IsValid();
         ac = GetPlugIn()->AircraftSelectNext(ac))
    {
        EuroScopePlugIn::CAircraftPositionData acpos = ac.GetPosition();
        const char *c = ac.GetCallsign();
        object.printtofile_simple_char(*c);    // note: writes only the first character of the callsign
        object.printtofile_simple_int(ac.GetState());
    }
    // Note: at this point the loop has ended because ac.IsValid() returned false,
    // and acfp was never assigned, so these calls act on invalid objects.
    object.printtofile_simple_int(ac.GetVerticalSpeed());
    object.printtofile_simple_int(acfp.GetFinalAltitude());
    cout << acfp.GetAlternate();
}
the "printtofile_simple_int" and "printtofile_simple_char" are defined is the class CPythonPlugIn like this:
void printtofile_simple_int(int n){
ofstream textfile;
textfile.open("FP_simple_int.txt");
textfile<<(n);
textfile.close();
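As a side note, a minimal variant of that helper (the absolute path is illustrative) appends instead of truncating on every call and doesn't depend on EuroScope's working directory, which rules out two common reasons for output files appearing empty or missing:

#include <fstream>

void printtofile_simple_int(int n) {
    // std::ios::app appends a line per call instead of overwriting the file;
    // an absolute path avoids relying on the host program's working directory.
    std::ofstream textfile("C:\\temp\\FP_simple_int.txt", std::ios::app);
    textfile << n << '\n';
}   // the stream closes automatically when textfile goes out of scope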
So I open the program, load the .dll I created with Build -> Build Solution, and it does nothing: the .txt files aren't even created, and even the cout produces nothing.
I will give you some of the prototype info from the header file "EuroScopePlugIn.h" in case you need it to understand my micro-program. If you need anything else, ask me and I'll put it here.
//---GetPlugIn-----------------------------------------------------
inline CPlugIn * GetPlugIn ( void )
{
return m_pPlugIn ;
} ;
&
CAircraft AircraftSelectFirst ( void ) const ;
//-----------------------------------------------------------------
// Return :
// An aircraft object instance.
//
// Remark:
// This instance is only valid inside the block you are querying.
// Do not save it to a static place or into a member variables.
// Subsequent use of an invalid extracted route reference may
// cause ES to crash.
//
// Description :
// It selects the first AC in the list.
//-----------------------------------------------------------------
&
int GetFinalAltitude ( void ) const ;
//-----------------------------------------------------------------
// Return :
// The final requested altitude.
//-----------------------------------------------------------------
Please, guys, I need help getting started with the plug-in; from that point on, with a methodology of trial and error, I'll be on my way. I'm just finding it extremely hard to start...
Thank you very much for the help.
