How to play more than one stream at once using soundio?

I'm new to soundio. I'm wondering how to play more than one audio source at once.
Am I supposed to create several streams (which seems wrong) and let the operating system do the mixing, or am I supposed to implement software mixing myself?
Software mixing seems tough too, especially if my input sources use different sample rates.
Am I basically asking "how to mix audio"?
I need a little direction.
Here's my use-case:
I have 5 different MP3 files. One is background music and the other 4 are sound effects. I want to start the background music and then play the sound effects as the user does something (such as click a graphical button). (This is for a game)

You can create multiple streams and play them simultaneously; you don't need to do the mixing yourself. Even so, it takes a fair amount of work.
Define WAVE_INFO and PLAYBACK_INFO:
struct WAVE_INFO
{
    SoundIoFormat format;
    std::vector<unsigned char*> data; // decoded sample data
    int frames; // number of frames in this clip
};
struct PLAYBACK_INFO
{
    const WAVE_INFO* wave_info; // information about the sound clip
    int progress; // number of frames already played
};
Extract the WAVE info from your sound clips and store it in an array of WAVE_INFO: std::vector<WAVE_INFO> waves_. This vector is not going to change after being initialized.
When you want to play waves_[index]:
SoundIoOutStream* outstream = soundio_outstream_create(sound_device_);
outstream->write_callback = write_callback;
PLAYBACK_INFO* playback_info = new PLAYBACK_INFO{ &waves_[index], 0 };
outstream->format = playback_info->wave_info->format;
outstream->userdata = playback_info;
soundio_outstream_open(outstream);
soundio_outstream_start(outstream);
std::thread stopper([this, outstream]()
{
    PLAYBACK_INFO* playback_info = (PLAYBACK_INFO*)outstream->userdata;
    while (playback_info->progress != playback_info->wave_info->frames)
    {
        soundio_wait_events(soundio_);
    }
    soundio_outstream_destroy(outstream);
    delete playback_info;
});
stopper.detach();
In the write_callback function:
PLAYBACK_INFO* playback_info = (PLAYBACK_INFO*)outstream->userdata;
int frames_left = playback_info->wave_info->frames - playback_info->progress;
if (frames_left == 0)
{
    soundio_wakeup(Window::window_->soundio_); // wake the stopper thread so it can clean up
    return;
}
if (frames_left > frame_count_max)
{
    frames_left = frame_count_max;
}
// Fill the buffer using soundio_outstream_begin_write and
// soundio_outstream_end_write with the data in
// playback_info->wave_info->data, taking playback_info->progress
// into account, and update playback_info->progress by the number
// of frames written to the buffer.
// For background music:
if (playback_info->wave_info->frames == playback_info->progress)
{
    // if the application has not exited:
    playback_info->progress = 0;
}
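For reference, here is a minimal sketch of the whole callback. It assumes (my assumption, not necessarily the original layout) that wave_info->data holds one buffer of packed samples per channel; the background-music looping and error handling are omitted, and it needs <algorithm> and <cstring>:
static void write_callback(SoundIoOutStream* outstream, int frame_count_min, int frame_count_max)
{
    PLAYBACK_INFO* playback_info = (PLAYBACK_INFO*)outstream->userdata;
    int frames_left = playback_info->wave_info->frames - playback_info->progress;
    if (frames_left == 0)
    {
        soundio_wakeup(Window::window_->soundio_); // wake the stopper thread
        return;
    }
    int frame_count = std::min(frames_left, frame_count_max);

    SoundIoChannelArea* areas;
    soundio_outstream_begin_write(outstream, &areas, &frame_count);

    const int bytes_per_sample = soundio_get_bytes_per_sample(outstream->format);
    for (int frame = 0; frame < frame_count; ++frame)
    {
        const int src_frame = playback_info->progress + frame;
        for (int ch = 0; ch < outstream->layout.channel_count; ++ch)
        {
            // copy one sample for this channel from the clip into the device buffer
            std::memcpy(areas[ch].ptr + frame * areas[ch].step,
                        playback_info->wave_info->data[ch] + src_frame * bytes_per_sample,
                        bytes_per_sample);
        }
    }
    soundio_outstream_end_write(outstream);
    playback_info->progress += frame_count;
}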
Although this solution works, it needs lots of improvements. Please consider it a proof of concept only.

Related

HoloLens spatial mapping: SpatialSurfaceMesh update problem (C++/WinRT)

I'm working on the spatial mapping processing for my HoloLens project.
Somehow, calling SpatialSurfaceMesh::TryComputeLatestMeshAsync keeps returning the same mesh data over time.
Is there another process involved in updating the observer?
void SpatialMapping::AddOrUpdateSurface(winrt::Windows::Perception::Spatial::SpatialCoordinateSystem const& coordinateSystem)
{
    using namespace winrt::Windows::Perception::Spatial::Surfaces;
    SpatialBoundingBox axisAlignedBoundingBox =
    {
        { 0.f, 0.f, 0.f },
        { 50.f, 50.f, 50.f },
    };
    SpatialBoundingVolume bounds = SpatialBoundingVolume::FromBox(coordinateSystem, axisAlignedBoundingBox);
    m_surfaceObserver.SetBoundingVolume(bounds);
    m_surfaceObserver.ObservedSurfacesChanged(
        winrt::Windows::Foundation::TypedEventHandler
            <SpatialSurfaceObserver, winrt::Windows::Foundation::IInspectable>
            ({ this, &SpatialMapping::Observer_ObservedSurfacesChanged })
    );
}
void SpatialMapping::Observer_ObservedSurfacesChanged(winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceObserver const& sender,
    winrt::Windows::Foundation::IInspectable const& object)
{
    using namespace winrt::Windows::Perception::Spatial::Surfaces;
    const auto mapContainingSurfaceCollection = sender.GetObservedSurfaces();
    // Process surface adds and updates.
    for (const auto& pair : mapContainingSurfaceCollection)
    {
        auto id = pair.Key();
        auto info = pair.Value();
        InsertAsync(id, info);
    }
}
Concurrency::task<void> SpatialMapping::InsertAsync(winrt::guid /*const&*/ id, winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceInfo /*const&*/ newSurfaceInfo)
{
    using namespace winrt::Windows::Perception::Spatial::Surfaces;
    return concurrency::create_task([this, id, newSurfaceInfo]
    {
        const auto surfaceMesh = newSurfaceInfo.TryComputeLatestMeshAsync(m_maxTrianglesPerCubicMeter, m_surfaceMeshOptions).get();
        std::lock_guard<std::mutex> guard(m_meshCollectionLock);
        m_updatedSurfaces.emplace(id, surfaceMesh);
    });
}
Generation works; updating does not.
A manual attempt shows the same problem:
winrt::Windows::Foundation::IAsyncAction SpatialMapping::CollectSurfacesManuel()
{
    const auto mapContainingSurfaceCollection = m_surfaceObserver.GetObservedSurfaces();
    for (const auto& pair : mapContainingSurfaceCollection)
    {
        auto id = pair.Key();
        auto info = pair.Value();
        auto mesh{ co_await info.TryComputeLatestMeshAsync(m_maxTrianglesPerCubicMeter, m_surfaceMeshOptions) };
        {
            std::lock_guard<std::mutex> guard(m_meshCollectionLock);
            m_updatedSurfaces.emplace(id, mesh);
        }
    }
}
MVCE:
Create a new project from the template "Holographic DirectX 11 App (UWP, C++/WinRT)".
Add the files from:
https://github.com/lpnxDX/HL_MVCE_SpatialSurfaceMeshUpdateProblem.git
Replace m_main in AppView.h.
We did some research and have some thoughts about your question; let me explain the findings.
Does your Observer_ObservedSurfacesChanged method actually fire? Adding output statements or breakpoints can help you check. Since the SurfaceObserver should always be available, we usually need to check the availability of the surface observer each frame and recreate it when necessary; for a sample code snippet, please see here.
Have you set m_surfaceMeshOptions? It is not visible in the code you posted. If it is missing, you can configure it with the following statement:
surfaceMeshOptions->IncludeVertexNormals = true;
Microsoft provides the Holographic spatial mapping sample, which shows how to acquire spatial mapping data from Windows Perception in real time. It is similar to your needs. To narrow down whether this is an issue with your code, please try running that sample on your device.
If you still can't solve the problem after the above steps, could you provide an
MVCE, so that we can locate the issue or find a solution? Be careful to remove any privacy-related or other business code.
Your code has the following problems:
TryComputeLatestMeshAsync should be called from Observer_ObservedSurfacesChanged, not from concurrency::create_task.
TryComputeLatestMeshAsync returns a mesh with a matrix, vertices and indices. The indices should be stored in a safe location on the first run; they don't change later. The vertices and matrix should be copied each time they are returned (see the sketch below). You shouldn't save the mesh itself, because its data is updated from various threads.
ObservedSurfacesChanged shouldn't be called every frame; it is a long-running function.
There may be more. I would recommend starting from the sample mentioned earlier.
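As a rough sketch of the copying idea in the second point (MeshSnapshot and Snapshot are illustrative names of mine; C++/WinRT exposes the raw bytes of an IBuffer through its data() helper):
struct MeshSnapshot
{
    std::vector<uint8_t> indices;  // copied once on the first run; they don't change later
    std::vector<uint8_t> vertices; // re-copied on every update
};
MeshSnapshot Snapshot(winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceMesh const& mesh, bool copyIndices)
{
    MeshSnapshot snapshot;
    auto vertexBuffer = mesh.VertexPositions().Data(); // IBuffer of packed vertex positions
    snapshot.vertices.assign(vertexBuffer.data(), vertexBuffer.data() + vertexBuffer.Length());
    if (copyIndices)
    {
        auto indexBuffer = mesh.TriangleIndices().Data();
        snapshot.indices.assign(indexBuffer.data(), indexBuffer.data() + indexBuffer.Length());
    }
    return snapshot; // store this snapshot, not the SpatialSurfaceMesh itself
}
The mesh-to-world transform can be captured similarly each update, via the mesh's coordinate system.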

Detect collision direction in Phaser3

I'm trying to create a 'Bomberman'-like game using the Phaser 3 library.
For this purpose I'd like to define a collision relationship between the player and the bricks,
and more importantly, detect the collision direction relative to the player.
I've noticed body properties like touching or blocked, but they are always set to false (please see below).
// scene.js
// bricks static group
this.bricks = this.physics.add.staticGroup({ immovable: true });
// player defined in an external file (as a sprite)
this.player = new Player(this, 90, 90);
// player.js
// ...
this.physics.add.collider(
    this,
    scene.bricks,
    function (player, brick) {
        if (player.body.touching.left) { // ALWAYS FALSE!!!
            this.isBlockedFromLeft = true;
        } else if (player.body.touching.right) {
            this.isBlockedFromRight = true; // ALWAYS FALSE!!!
        }
    },
    null,
    this
);
I'd appreciate any help. This is driving me crazy. Maybe there is a better way to do it and I'm missing something...
Thanks in advance.
So I finally figured it out.
The major issue was the way I defined the player movement. It should be
if (this.keyboard.right.isDown) {
    this.body.setVelocityX(this.speed);
}
rather than
if (this.keyboard.right.isDown) {
    this.x += this.speed;
}
The second way prevents collision detection from happening and stops the body.touching and body.blocked properties from being updated.
Furthermore, I've found that for top-down tiled games it's much easier to build the game using the tilemap feature.
Official examples can be found here:
https://phaser.io/examples/v3/search?search=map
and here is a tutorial on how to make a tilemap using a lightweight tool called 'Tiled':
https://www.youtube.com/watch?v=2_x1dOvgF1E
Thank you all!

Moving objects collision Phaser [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
My collision detection for moving objects is not working. I have created a demo so you can see it and see my problem, with all code and everything included.
As you may be able to see, I have 2 blocks coming from left to right and a tank shooting bullets. I tried all kinds of directions and I always get the same results: basically, my collision only works up to the value of the blocks' velocity; in the example in the zip file it only works up to 300px. If I change the block speed to a greater number, then the collision works up to that same number of pixels. It is really weird.
I was wondering whether I'm just doing the whole thing wrong, or what could my problem be? I would appreciate any pointers or ideas that can help me solve this issue. Thanks.
Demo source code.
BasicGame.Game = function (game) {
    // When a State is added to Phaser it automatically has the following properties set on it, even if they already exist:
    this.game;      // a reference to the currently running game (Phaser.Game)
    this.add;       // used to add sprites, text, groups, etc (Phaser.GameObjectFactory)
    this.camera;    // a reference to the game camera (Phaser.Camera)
    this.cache;     // the game cache (Phaser.Cache)
    this.input;     // the global input manager. You can access this.input.keyboard, this.input.mouse, as well from it. (Phaser.Input)
    this.load;      // for preloading assets (Phaser.Loader)
    this.math;      // lots of useful common math operations (Phaser.Math)
    this.sound;     // the sound manager - add a sound, play one, set-up markers, etc (Phaser.SoundManager)
    this.stage;     // the game stage (Phaser.Stage)
    this.time;      // the clock (Phaser.Time)
    this.tweens;    // the tween manager (Phaser.TweenManager)
    this.state;     // the state manager (Phaser.StateManager)
    this.world;     // the game world (Phaser.World)
    this.particles; // the particle manager (Phaser.Particles)
    this.physics;   // the physics manager (Phaser.Physics)
    this.rnd;       // the repeatable random number generator (Phaser.RandomDataGenerator)
    // You can use any of these from any function within this State.
    // But do consider them as being 'reserved words', i.e. don't create a property for your own game called "world" or you'll over-write the world reference.
    this.bulletTimer = 0;
};
BasicGame.Game.prototype = {
    create: function () {
        // Enable physics: set the physics system
        this.game.physics.startSystem(Phaser.Physics.ARCADE);
        // Honestly, just about anything could go here. It's YOUR game after all. Eat your heart out!
        this.createBullets();
        this.createTanque();
        this.timerBloques = this.time.events.loop(1500, this.createOnebloque, this);
    },
    update: function () {
        if (this.game.input.activePointer.isDown) {
            this.fireBullet();
        }
        this.game.physics.arcade.overlap(this.bullets, this.bloque, this.collisionBulletBloque, null, this);
    },
    createBullets: function () {
        this.bullets = this.game.add.group();
        this.bullets.enableBody = true;
        this.bullets.physicsBodyType = Phaser.Physics.ARCADE;
        this.bullets.createMultiple(100, 'bulletSprite');
        this.bullets.setAll('anchor.x', 0.5);
        this.bullets.setAll('anchor.y', 1);
        this.bullets.setAll('outOfBoundsKill', true);
        this.bullets.setAll('checkWorldBounds', true);
    },
    fireBullet: function () {
        if (this.bulletTimer < this.game.time.time) {
            this.bulletTimer = this.game.time.time + 1400;
            this.bullet = this.bullets.getFirstExists(false);
            if (this.bullet) {
                this.bullet.reset(this.tanque.x, this.tanque.y - 20);
                this.game.physics.arcade.enable(this.bullet);
                this.bullet.enableBody = true;
                this.bullet.body.velocity.y = -800;
            }
        }
    },
    createOnebloque: function () {
        this.bloquecs = ["bloqueSprite", "bloquelSprite"];
        this.bloquesr = this.bloquecs[Math.floor(Math.random() * 2)];
        this.bloque = this.add.sprite(0, 360, this.bloquesr);
        this.game.physics.arcade.enable(this.bloque);
        this.bloque.enableBody = true;
        this.bloque.body.velocity.x = 300;
        this.bloque.body.kinematic = true;
        this.bloque.checkWorldBounds = true;
        this.bloque.outOfBoundsKill = true;
        this.bloque.body.immovable = true;
    },
    createTanque: function () {
        this.tanqueBounds = new Phaser.Rectangle(0, 600, 1024, 150);
        this.tanque = this.add.sprite(500, 700, 'tanqueSprite');
        this.tanque.inputEnabled = true;
        this.tanque.input.enableDrag(true);
        this.tanque.anchor.setTo(0.5, 0.5);
        this.tanque.input.boundsRect = this.tanqueBounds;
    },
    collisionBulletBloque: function (bullet, bloque) {
        this.bullet.kill();
    },
    quitGame: function (pointer) {
        // Here you should destroy anything you no longer need.
        // Stop music, delete sprites, purge caches, free resources, all that good stuff.
        // Then let's go back to the main menu.
        this.state.start('MainMenu');
    }
};
The code definitely helps, but you can actually get an idea of what's happening just from playing the game.
As the blocks are going by you can actually get the bullets to disappear if you fire and hit the right edge of the center-most block.
What's happening is that you're checking for a collision between the bullets and a block, but the block is getting redefined every time you add a new one to the screen on the left.
What you probably want to do is create a pool of blocks (bloquePool for example), almost exactly like what you're doing in createBullets with your bullet pool (what you've called bullets).
Then check for a collision between the group of bullets and the group of blocks:
this.game.physics.arcade.overlap(
this.bullets, this.bloquePool, this.collisionBulletBloque, null, this
);
You should get the individual bullet and the individual block that collided passed into collisionBulletBloque so you can still kill the bullet as you are.
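For illustration, here is a sketch of such a block pool, mirroring createBullets (bloquePool and createBloquePool are made-up names, and the random choice between the two block sprites is omitted for brevity); createBloquePool would be called once from create, alongside createBullets:
createBloquePool: function () {
    this.bloquePool = this.game.add.group();
    this.bloquePool.enableBody = true;
    this.bloquePool.physicsBodyType = Phaser.Physics.ARCADE;
    this.bloquePool.createMultiple(20, 'bloqueSprite');
    this.bloquePool.setAll('checkWorldBounds', true);
    this.bloquePool.setAll('outOfBoundsKill', true);
},
createOnebloque: function () {
    // recycle a dead block instead of redefining this.bloque each time
    var bloque = this.bloquePool.getFirstExists(false);
    if (bloque) {
        bloque.reset(0, 360);
        bloque.body.velocity.x = 300;
        bloque.body.immovable = true;
    }
},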

(Leadtools) Is RasterCodecs.Load thread-safe?

I want to convert a PDF to images. I am using Leadtools and to increase the speed, I am using multi-threading in the following way.
string multiPagePDF = @"Manual.pdf";
string destFileName = @"output\Manual";
Task.Factory.StartNew(() =>
{
    using (RasterCodecs codecs = new RasterCodecs())
    {
        CodecsImageInfo info = codecs.GetInformation(multiPagePDF, true);
        ParallelOptions po = new ParallelOptions();
        po.MaxDegreeOfParallelism = 5;
        Parallel.For(1, info.TotalPages + 1, po, i =>
        {
            RasterImage image = codecs.Load(multiPagePDF, i);
            codecs.Save(image, destFileName + i + ".png", RasterImageFormat.Png, 0);
        });
    }
});
Is this thread-safe? Will it result in unexpected output?
I tried this a few times and there were instances when a specific page appeared twice in the output images.
Solution
According to Leadtools online chat support (which is very helpful, by the way), RasterCodecs.Load is NOT thread-safe, and the above code can produce unexpected output (in my case, page 1 occurred twice in the output set of images). The solution is to define the codecs variable within the Parallel.For so that each iteration accesses its own RasterCodecs instance.
Amyn,
As you found out, the correct way to use the RasterCodecs object in this case is this:
Task.Factory.StartNew(() =>
{
    using (RasterCodecs codecs = new RasterCodecs())
    {
        CodecsImageInfo info = codecs.GetInformation(multiPagePDF, true);
        ParallelOptions po = new ParallelOptions();
        po.MaxDegreeOfParallelism = 5;
        Parallel.For(1, info.TotalPages + 1, po, i =>
        {
            using (RasterCodecs codecs2 = new RasterCodecs())
            {
                RasterImage image = codecs2.Load(multiPagePDF, i);
                codecs2.Save(image, destFileName + i + ".png", RasterImageFormat.Png, 0);
            }
        });
    }
});
This gives you the same speed benefits when running on a multi-core processor without causing any conflicts between concurrent threads.
The LEADTOOLS RasterCodecs.Load() and RasterCodecs.Save() methods are thread-safe. The reason for creating multiple instances of the RasterCodecs class is that this class internally uses structures that hold many different loading and saving options for files. Using these structures (where these options are changed) across multiple threads can cause unpredictable results. One such property in the loading options structure is the page number. For this reason, using separate instances of this class is recommended.

How to get a DirectShow filter?

I am recording video from a webcam using DirectshowLib2005.dll in C#.NET. I have this code to start video recording, as below:
try
{
    IBaseFilter capFilter = null;
    IBaseFilter asfWriter = null;
    IFileSinkFilter pTmpSink = null;
    ICaptureGraphBuilder2 captureGraph = null;
    GetVideoDevice();
    if (availableVideoInputDevices.Count > 0)
    {
        // init capture graph
        graphBuilder = (IFilterGraph2)new FilterGraph();
        captureGraph = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
        // set the filter graph on the builder
        captureGraph.SetFiltergraph(graphBuilder);
        // select which device the graph will use
        graphBuilder.AddSourceFilterForMoniker(AvailableVideoInputDevices.First().Mon, null, AvailableVideoInputDevices.First().Name, out capFilter);
        captureDeviceName = AvailableVideoInputDevices.First().Name;
        // check whether the saving path exists; if not, create it
        if (!Directory.Exists(ConstantHelper.RootDirectoryName + "\\Assets\\Video\\"))
        {
            Directory.CreateDirectory(ConstantHelper.RootDirectoryName + "\\Assets\\Video\\");
        }
        #region WMV
        // set the output file name and file type
        captureGraph.SetOutputFileName(MediaSubType.Asf, ConstantHelper.RootDirectoryName + "\\Assets\\Video\\" + videoFilename + ".wmv", out asfWriter, out pTmpSink);
        // configure which video settings the graph uses
        IConfigAsfWriter lConfig = asfWriter as IConfigAsfWriter;
        Guid asfFilter = new Guid("8C45B4C7-4AEB-4f78-A5EC-88420B9DADEF");
        lConfig.ConfigureFilterUsingProfileGuid(asfFilter);
        #endregion
        // render the stream to the output file using the graph settings
        captureGraph.RenderStream(null, null, capFilter, null, asfWriter);
        m_mediaCtrl = graphBuilder as IMediaControl;
        m_mediaCtrl.Run();
        isVideoRecordingStarted = true;
        VideoStarted(m_mediaCtrl, null);
    }
    else
    {
        isVideoRecordingStarted = false;
    }
}
catch (Exception Ex)
{
    ErrorLogging.WriteErrorLog(Ex);
}
If you look at these lines of code:
IConfigAsfWriter lConfig = asfWriter as IConfigAsfWriter;
Guid asfFilter = new Guid("8C45B4C7-4AEB-4f78-A5EC-88420B9DADEF");
lConfig.ConfigureFilterUsingProfileGuid(asfFilter);
they apply the video settings described by that GUID. I got this GUID from the file located at "C:\windows\WMSysPr9.prx".
So my question is: how do I create my own video settings, with format, resolution and all?
And how do I record video from the webcam in black-and-white or in grayscale?
How do I create my own video settings, with format, resolution and all?
GUID-based profiles are deprecated, though you can still use them. You can build a custom profile in code using WMCreateProfileManager and friends (you start with an empty profile and add video and/or audio streams at your discretion). This is a C++ API, and I suppose that WindowsMedia.NET, a sister project to the DirectShowLib you are already using, exposes it to .NET code.
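A minimal C++ sketch of that path (error handling omitted; the stream configuration details depend on your needs):
#include <wmsdk.h>
#pragma comment(lib, "wmvcore.lib")

IWMProfileManager* profileManager = NULL;
WMCreateProfileManager(&profileManager);

IWMProfile* profile = NULL;
profileManager->CreateEmptyProfile(WMT_VER_9_0, &profile); // start with an empty profile

IWMStreamConfig* stream = NULL;
profile->CreateNewStream(WMMEDIATYPE_Video, &stream); // add a video stream
// ...configure the stream (bitrate, buffer window, media type) here...
profile->AddStream(stream);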
The Windows SDK WMGenProfile sample both shows how to build a profile manually and provides a tool to build one interactively and save it into a .PRX file you can use in your application:
$(WindowsSDK)\Samples\multimedia\windowsmediaformat\wmgenprofile
How do I record video from the webcam in black-and-white or in grayscale?
The camera gives you a picture, which then goes through the pipeline, with certain processing, until it is recorded. The ability to make it grayscale is not something inherent.
There are two things you might want to think of. First of all, if the camera is capable of stripping color information at capture time, you can leverage this. Check it out: if its settings have a Saturation slider, just set it to the minimal value and the camera will give you grayscale.
In code, you use the IAMVideoProcAmp interface for this.
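With the DirectShowLib types the question already uses, that looks roughly like this (a sketch; capFilter is the capture filter from the question, and not every camera supports the property):
IAMVideoProcAmp procAmp = capFilter as IAMVideoProcAmp;
if (procAmp != null)
{
    int min, max, step, defaultValue;
    VideoProcAmpFlags flags;
    // query the valid range, then drive saturation to its minimum
    int hr = procAmp.GetRange(VideoProcAmpProperty.Saturation, out min, out max, out step, out defaultValue, out flags);
    if (hr == 0)
    {
        procAmp.Set(VideoProcAmpProperty.Saturation, min, VideoProcAmpFlags.Manual);
    }
}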
Another option, in particular if the camera lacks that capability, is to apply a post-processing filter or effect that converts the video to grayscale. There is no stock solution for this, but there are several ways to achieve the effect:
use a third-party filter that strips color
export frames from the DirectShow pipeline and convert the data in code, using the Color Control Transform DSP (available starting with Windows Vista) or GDI functions
use a Sample Grabber in the streaming pipeline and update the image bits directly
