Is there a way of loading an SVG drawing asynchronously with SkiaSharp?

I am using SkiaSharp to load an SVG drawing. It's a site plan, so it's reasonably complex, and it takes a long time to load: about 3 seconds on my Samsung Galaxy phone, during which the phone completely locks up, which is quite unacceptable.
I am using SkiaSharp.Extended.Svg.SKSvg, but cannot find an asynchronous version of the Load() method. Is there one? Or maybe a better way of doing this?
I am overlaying objects on top of the site plan, and it has taken me considerable time to get all the scaling and alignment sorted, so if at all possible I'd like to stick with SkiaSharp rather than start from scratch with something completely different.
Thanks for your help!

3 seconds does sound a bit long...
It may not be the SVG part, but rather the loading of the file off the file system.
Either way, you might be able to just load the whole thing in a background task. A sketch (assuming the SVG comes from a file path and is later drawn in an SKCanvasView):
var pictureTask = Task.Run(() =>
{
    // Parse off the UI thread; svgPath is wherever the site plan lives.
    using (var stream = File.OpenRead(svgPath))
        return new SkiaSharp.Extended.Svg.SKSvg().Load(stream); // returns an SKPicture
});
// Later, e.g. in an async handler: _picture = await pictureTask; canvasView.InvalidateSurface();

Related

Why do GBuffers need to be created for each frame in D3D12?

I have experience with D3D11 and want to learn D3D12. I am reading the official D3D12 multithread example and don't understand why the shadow map (generated in the first pass as a DSV, consumed in the second pass as SRV) is created for each frame (actually only 2 copies, as the FrameResource is reused every 2 frames).
The code that creates the shadow map resource is here, in the FrameResource class, instances of which are created here.
There is actually another resource that is created for each frame: the constant buffer. I kind of understand the constant buffer: because it is written by the CPU (D3D11 dynamic usage) and needs to remain unchanged until the GPU finishes using it, there need to be two copies. However, I don't understand why the shadow map needs the same treatment, because it is only modified by the GPU (D3D11 default usage), and there are fence commands to separate reading and writing of that texture anyway. As long as the GPU respects the fence, a single texture should be enough for it to work correctly. Where am I wrong?
Thanks in advance.
EDIT
According to the comment below, the "fence" I mentioned above should more accurately be called "resource barrier".
The key issue is that you don't want to stall the GPU for best performance. Double-buffering is a minimal requirement, but typically triple-buffering is better for smoothing out frame-to-frame rendering spikes, etc.
FWIW, the default behavior of DXGI Present is to stall only after you have submitted THREE frames of work, not two.
Of course, there's a trade-off between triple-buffering and input responsiveness, but if you are maintaining 60 Hz or better, it's likely not noticeable.
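To make that concrete, here is a minimal sketch of the per-frame wait most D3D12 renderers perform before reusing a frame's resources (the fence and event names are assumptions, not taken from the sample):
// Before recycling a frame resource, make sure the GPU has finished the
// work that last used it; only then is it safe to overwrite its buffers.
void WaitForFrameResource(ID3D12Fence* fence, UINT64 lastSubmittedValue, HANDLE fenceEvent)
{
    if (fence->GetCompletedValue() < lastSubmittedValue)
    {
        // Still in flight on the GPU: block the CPU until the fence is signaled.
        fence->SetEventOnCompletion(lastSubmittedValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
}
With two or three frame resources this wait rarely fires; with a single copy of every per-frame resource, the CPU would stall on it every frame.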
With all that said, you typically don't need to double-buffer depth/stencil buffers for rendering, although if you wanted the initial write of the depth buffer to overlap with reads of the previous frame's depth buffer, then you would want distinct buffers per frame for performance and correctness.
The 'writes' are all complete before the 'reads' in DX12 because of the injection of the 'Resource Barrier' into the command-list:
void FrameResource::SwapBarriers()
{
    // Transition the shadow map from writeable to readable.
    m_commandLists[CommandListMid]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_DEPTH_WRITE,
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
}

void FrameResource::Finish()
{
    // Transition it back from readable to writeable for the next frame.
    m_commandLists[CommandListPost]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            D3D12_RESOURCE_STATE_DEPTH_WRITE));
}
Note that this sample is a port/rewrite of the older legacy DirectX SDK sample MultithreadedRendering11, so it may be just an artifact of convenience to have two shadow buffers instead of just one.

Programmatic access to a sound played through OpenAL

I am working with an application that uses the OpenAL API quite extensively. In particular, there are multiple sound sources, non-trivial listener filters, etc.
I want to be able to run this application significantly faster than real-time. At the same time, the sound must be saved for later postprocessing. Is there a way to access the OpenAL output programmatically (virtually) without ever playing the sound on the real playback device?
Ideally, I'd like access to the audio that would be played during each tick of my application's main loop. Normally one tick corresponds to one rendered frame (e.g. 1/30th of a second), but in this case we would be running the app as fast as possible.
We ended up using OpenAL Soft to do this. Example:
#include "alext.h"

// Grab the loopback entry point from the ALC_SOFT_loopback extension.
LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT =
    (LPALCLOOPBACKOPENDEVICESOFT)alcGetProcAddress(NULL, "alcLoopbackOpenDeviceSOFT");
Replace your default device with a loopback device, which never touches real playback hardware:
ALCdevice *device = alcLoopbackOpenDeviceSOFT(NULL);
ALCcontext *context = alcCreateContext(device, attrs);
Set the attrs as you would for your default device.
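For example, a plausible attribute list for a 16-bit stereo context at 44.1 kHz (the ALC_*_SOFT names come from the ALC_SOFT_loopback extension; pick whatever format you need):
// A loopback context must state its render format explicitly.
ALCint attrs[] = {
    ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT, // channel layout
    ALC_FORMAT_TYPE_SOFT,     ALC_SHORT_SOFT,  // 16-bit integer samples
    ALC_FREQUENCY,            44100,           // sample rate in Hz
    0                                          // terminator
};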
Then in the main loop use:
LPALCRENDERSAMPLESSOFT alcRenderSamplesSOFT =
    (LPALCRENDERSAMPLESSOFT)alcGetProcAddress(NULL, "alcRenderSamplesSOFT");
alcRenderSamplesSOFT(device, buffer, 1024);
Here buffer receives 1024 rendered sample frames in the format you configured. Rendering is not tied to the real-time clock, so it runs faster than real-time and you can render each tick's audio as fast as the main loop goes.
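Since the sound also has to be saved for later postprocessing, each rendered block can simply be appended to a file. A sketch, assuming the 16-bit stereo format above and a short buffer[1024 * 2] (needs <fstream>; in practice you would keep the stream open across ticks):
// Append the freshly rendered block to a raw PCM file; a WAV header can be
// prepended once the total length is known.
std::ofstream pcm("capture.pcm", std::ios::binary | std::ios::app);
pcm.write(reinterpret_cast<const char*>(buffer), 1024 * 2 * sizeof(short));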
Are you able to do your required processing on the audio data before it is shipped to OpenAL? I've done a lot with javax.sound.sampled when it is untethered from the blocking write() method in SourceDataLine, especially when saving to file rather than playing back.
From what little I know about OpenAL, there is also a blocking process that occurs when data is shipped, with a queue of managed arrays. I've been meaning to look into this further...
(Probably not being very helpful here. Apologies.)

Allegro sound not working (playing wav file)

I'm making a game for a school project, and I have a sound effect that is supposed to play whenever a laser is fired. There was a brief period when it worked fine, but it has since stopped. After it stopped, I changed the code a bit, as I wanted to store the file in a datafile.
Initializing sound in Allegro
install_sound(DIGI_AUTODETECT, MIDI_AUTODETECT, NULL);
This is the code for loading and playing the sound
//Loading sound file from datafile
DATAFILE *laserShot = NULL;
laserShot = load_datafile_object("asteroids.dat", "laser_Shot");
//Error checking
if (laserShot == NULL || laserShot->dat == NULL) {
allegro_message("Error loading laser_Shot.wav");
}
else {
//Playing sound for shot
play_sample((SAMPLE*) laserShot->dat, 255, 127, 1000, 0);
}
//Freeing memory
unload_datafile_object(laserShot);
The sound itself is very short (less than a second), if that is of any importance.
The sound is also triggered multiple times in quick succession, but there's actually more of a break between shots now than when it was originally working, so I don't think that makes a difference.
Is there something I'm getting blatantly wrong?
First, make sure all parameters are set; they aren't if you only call install_sound. You should also call this:
set_config_int("sound", "quality", 1);
The third parameter sets the sound quality, and 1 means the highest quality; for other values, check the Allegro library reference.
Second, you should allocate a voice. A voice is basically a slot in memory for playing a sample. By default, Allegro 4 can allocate up to 255 different voices, but the real number can be far lower depending on hardware. Note that allocate_voice takes a SAMPLE*, not a file name:
int laser_voice = allocate_voice((SAMPLE*)laserShot->dat);
Now you can set parameters such as volume, pan, sweep and playmode. For example, to loop the sample with a fixed volume and centered pan:
voice_set_volume( laser_voice, 200);
voice_set_pan( laser_voice, 127);
voice_set_playmode( laser_voice, PLAYMODE_LOOP);
For other options, see the Allegro reference.
Now, to play the sample, you just call
voice_start(laser_voice);
Then you can stop it, replay it, change its parameters, or swap in a different sample with reallocate_voice. That's all. At the end of your code, release the voice with
deallocate_voice(laser_voice);
Turns out I was just making a stupid mistake: I was calling the unloading function in the same function that played the sound. There was not enough time for the sound to play before the file was unloaded, so while there was technically no error for the compiler to catch and no crash, the code was trying to play a sound it had already forgotten. Removing the unloading call lets the sound play.
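A minimal sketch of that fix (illustrative structure, not the full game code): load once, play as often as needed, and unload only at shutdown.
// Loaded once at startup and kept alive for the whole game.
DATAFILE *laserShot = NULL;

void init_sounds()
{
    laserShot = load_datafile_object("asteroids.dat", "laser_Shot");
}

void fire_laser()
{
    // Safe to call on every shot; the sample stays loaded between calls.
    if (laserShot != NULL && laserShot->dat != NULL)
        play_sample((SAMPLE*)laserShot->dat, 255, 127, 1000, 0);
}

void shutdown_sounds()
{
    // Only now is the sample guaranteed to be done playing.
    unload_datafile_object(laserShot);
}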

FabricJS ApplyFilter is slow on large images

Taking a very basic stock example such as the redify filter with a large image (1200x1024), I was trying to determine why it takes (what I think is) too long. After some investigating, I found that the delay occurs in fabricjs::ApplyFilter, where replacement.src = canvasEl.toDataURL('image/png'); (line 17933 in 1.6.2). That takes a long time, even compared to the filter's complete run through all the pixels.
Is there some way around this? Can I do something differently to speed up the process? TIA

Xna Xbox framedrops when GC kicks in

I'm developing a pretty simple app (an XNA game) for the Xbox. The start page contains tiles with animated GIF images. Those GIFs are actually sets of PNG images, which get loaded once per tile and put in an array. Then, using a defined delay, the images are played back (a counter advances every time the delay passes).
This all works well; however, I noticed a small lag every few seconds in the movement of the GIF images, so I started adding some benchmarking:
http://gyazo.com/f5fe0da3ff81bd45c0c52d963feb91d8
As you can see, the FPS is pretty low for such a simple program (this is in debug; when running the app from the Xbox itself, I get an average of 62 fps).
2 important settings:
Graphics.SynchronizeWithVerticalRetrace = false;
IsFixedTimeStep = false;
Changing IsFixedTimeStep to true increases the lag: the settings tile has wheels that rotate, and you can see the wheels jump back a little every few seconds. The same goes for SynchronizeWithVerticalRetrace; enabling it also increases the lag.
I noticed a connection between the lag and the moment the garbage collector kicks in: every time it kicks in, there is a lag.
Don't mind the MAX HMU (heap memory usage), as it reflects the level at startup; the average is more realistic.
Here is another screenshot, from the performance monitor. I don't understand much of this tool yet (it's my first time using it), but I hope it helps:
http://gyazo.com/f70a3d400657ac61e6e9f2caaaf17587
After a little research I found the culprits.
I had custom components that all derived from GameComponent and were added to the Components list of the main Game class.
This was the first of two major problems: it caused everything to be updated, even things that didn't need an update. (The Draw method was the only one that kept the page state in mind and only drew when needed.)
I fixed this by using different "screens" (or pages, as I called them), which are now the only components that derive from GameComponent. Only the active page gets updated, and the custom components on that page are updated by it. Problem fixed; the structure is sketched below.
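In outline, it looks something like this (sketched in C++ with invented names; the original is C#): the game forwards Update only to the active page, and the page updates its own components.
#include <memory>
#include <vector>

// Only the active page is updated; inactive pages cost nothing per frame.
struct Page {
    virtual void Update(float dt) = 0; // updates the components on this page
    virtual ~Page() = default;
};

struct Game {
    std::vector<std::unique_ptr<Page>> pages;
    Page* activePage = nullptr; // points at one element of 'pages'

    void Update(float dt) {
        if (activePage)
            activePage->Update(dt);
    }
};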
The second big problem was the following.
I had made a class that helps me position things on the screen relatively, with percentages and the like: parent containers, aligns and v-aligns, etc.
That class had properties for sizes and vectors, but instead of saving the calculated value in a backing field, I recalculated it every time the property was accessed. Calculating complex things like that walks references (to parent and child containers, for example), which made it very hard on the CLR because it had a lot of work to do.
I rebuilt the whole positioning class into an optimized one with flags for recalculating only when necessary, and instead of drops of 20 fps, I now get an average of 170+ fps! A sketch of the idea follows.
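A minimal sketch of the dirty-flag idea (again C++ with invented names; the real class is C#, but the pattern is the same): the computed value lives in a backing field and is recalculated only when one of its inputs changes.
#include <cstdio>

// Cache a computed layout value behind a dirty flag instead of
// recomputing it on every property access.
struct Container {
    Container* parent = nullptr;
    float widthPercent = 100.0f; // width relative to the parent, in percent

    void SetWidthPercent(float value) {
        widthPercent = value;
        dirty = true; // invalidate the cache (children would be invalidated too)
    }

    float Width() {
        if (dirty) {
            // Recompute only when an input has changed since the last query.
            float parentWidth = parent ? parent->Width() : 1280.0f; // assumed screen width
            cachedWidth = parentWidth * widthPercent / 100.0f;
            dirty = false;
        }
        return cachedWidth;
    }

private:
    float cachedWidth = 0.0f; // backing field for the computed value
    bool dirty = true;
};

int main() {
    Container root, tile;
    tile.parent = &root;
    tile.SetWidthPercent(25.0f);
    std::printf("%.1f\n", tile.Width()); // computed once: 320.0
    std::printf("%.1f\n", tile.Width()); // served from the cache
}
A parent whose size changes would also mark its children dirty, so their cached values are refreshed on the next access.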
