Realtime 3D Audio Streaming and Playback

Currently, I am working on streaming audio to Unity over a network. I successfully integrated a media library (GStreamer) with Unity, and I was able to play the audio inside the environment using an audio filter callback attached to an AudioSource:
void OnAudioFilterRead(float[] data, int channels)
{
    // fill the data array with the streamed audio samples
    // ...
}
That function gave me 2D audio playback with very low latency.
In my application, however, I want to render the audio in 3D space, so that the rendering depends on the orientation of the camera (the AudioListener).
I tried streaming the audio data into an AudioClip instead, using the following:
AudioClip TargetClip;
public AudioSource TargetSrc;

void Start()
{
    int freq = 32000; // sampling rate of the streamed audio
    TargetClip = AudioClip.Create("test_Clip", freq, 1, freq, true, true, OnAudioRead, OnAudioSetPosition);
    TargetSrc.clip = TargetClip;
    TargetSrc.Play();
}

void OnAudioRead(float[] data)
{
    // fill the data array with the streamed audio samples
    // ...
}

void OnAudioSetPosition(int newPosition)
{
}
When I played the audio, it was rendered in 3D space as I wanted, but there was huge latency (more than two seconds).
Is there any way to solve the latency problem?

I figured out how to solve this issue.
For anyone facing a similar problem: the method I used at first, filling the AudioClip with data, is not efficient. I should have used OnAudioFilterRead() in both cases; for spatial audio calculations it should be used as follows:
public AudioSource TargetSrc;

void Start()
{
    var dummy = AudioClip.Create("dummy", 1, 1, AudioSettings.outputSampleRate, false);
    dummy.SetData(new float[] { 1 }, 0);
    TargetSrc.clip = dummy; // just to make Unity play the AudioSource
    TargetSrc.loop = true;
    TargetSrc.spatialBlend = 1;
    TargetSrc.Play();
}

void OnAudioFilterRead(float[] data, int channels)
{
    // "data" contains the spatialization weights Unity computed for the dummy clip;
    // multiply them by the streamed audio samples
    for (int i = 0; i < data.Length; ++i)
    {
        data[i] *= internal_getSample(i);
    }
}
Hope this helps someone else.

Related

How to deactivate sound in my App with 1 button?

I have some mp3 audios in some scenes, and I have one button in my main scene. When the user presses this button, I want my App to play no mp3 sounds. Do I have to duplicate my App without sounds, or how can I do it?
I have this, but it does not work:
public class AudioApp : MonoBehaviour {

    public void SonidoApp() {
        PlayerPrefs.SetInt("MP3AudioState", 1);
    }

    public void SonidoAppmute() {
        PlayerPrefs.SetInt("MP3AudioState", 0);
    }
}
What you can do is, when the user presses the button, save a value in PlayerPrefs (0 for unmuted, 1 for muted, for example):
PlayerPrefs.SetInt("MP3AudioState", 1);
And wherever you play the audio, all you need to do is check the value you saved:
if (PlayerPrefs.GetInt("MP3AudioState") == 0) {
    // Play the sound
}
So if the value you saved is 0 at the time the sound is about to play, you let it pass; if it is 1, you don't.
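A minimal sketch of that pattern, assuming a hypothetical helper class and an AudioSource assigned in the Inspector (the class and method names are illustrative, not from the question):
using UnityEngine;

public class MutableAudio : MonoBehaviour {

    public AudioSource source; // assigned in the Inspector

    // hook this to the mute/unmute buttons
    public void SetMuted(bool muted) {
        PlayerPrefs.SetInt("MP3AudioState", muted ? 1 : 0);
    }

    // call this instead of source.Play() wherever a sound starts
    public void PlayIfAllowed() {
        if (PlayerPrefs.GetInt("MP3AudioState", 0) == 0) {
            source.Play();
        }
    }
}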
To disable all audio in your scene, simply set the volume of the current AudioListener to zero.
The AudioListener is attached to your camera by default, and its volume property is static, so this could work for you:
AudioListener.volume = 0;

Unity3D Play sound from particle

I am trying to play a sound when a particle collides with a wall. Right now, it just plays the sound from the parent object, which is the player.
However, I want the sound to play from the particle, which means that when a particle is far to the left, you faintly hear the sound coming from the left.
Is there a way to play the sound from a particle, when it collides?
You can use OnParticleCollision and the ParticlePhysicsExtensions, and play a sound with PlayClipAtPoint:
using UnityEngine;
using System.Collections;

[RequireComponent(typeof(ParticleSystem))]
public class CollidingParticles : MonoBehaviour {

    public AudioClip collisionSFX;

    ParticleSystem partSystem;
    ParticleCollisionEvent[] collisionEvents;

    void Awake () {
        partSystem = GetComponent<ParticleSystem>();
        collisionEvents = new ParticleCollisionEvent[16];
    }

    void OnParticleCollision (GameObject other) {
        int safeLength = partSystem.GetSafeCollisionEventSize();
        if (collisionEvents.Length < safeLength)
            collisionEvents = new ParticleCollisionEvent[safeLength];

        int totalCollisions = partSystem.GetCollisionEvents(other, collisionEvents);
        for (int i = 0; i < totalCollisions; i++)
            AudioSource.PlayClipAtPoint(collisionSFX, collisionEvents[i].intersection);

        print(totalCollisions);
    }
}
The problem is that the temporary AudioSource created by PlayClipAtPoint cannot be retrieved (to configure it as a 3D sound). So you are better off writing your own PlayClipAtPoint method that instantiates a prefab, already configured with a 3D AudioSource and the clip you want, and calls Destroy(instance, seconds) to mark it for timed destruction.
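A minimal sketch of that replacement, assuming a recent Unity version and a hypothetical prefab whose AudioSource has spatialBlend set to 1 (the names are illustrative):
using UnityEngine;

public class AudioUtil : MonoBehaviour {

    // prefab preconfigured as a full 3D source (spatialBlend = 1)
    public AudioSource spatialSourcePrefab;

    public void PlayClipAtPoint3D(AudioClip clip, Vector3 position) {
        AudioSource instance = Instantiate(spatialSourcePrefab, position, Quaternion.identity);
        instance.clip = clip;
        instance.Play();
        // mark the instance for timed destruction once the clip has finished
        Destroy(instance.gameObject, clip.length);
    }
}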
The only way I can imagine is overriding the animation of the particle system via GetParticles/SetParticles. That way you can provide your own collision detection for the particles with Physics.RaycastAll and play a sound when collisions occur.
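A rough sketch of that idea, assuming the particle system simulates in world space; it uses a single Physics.Raycast per particle along the distance traveled this frame rather than RaycastAll, and all names are illustrative:
using UnityEngine;

[RequireComponent(typeof(ParticleSystem))]
public class ParticleRaycastAudio : MonoBehaviour {

    public AudioClip collisionSFX;

    ParticleSystem ps;
    ParticleSystem.Particle[] particles;

    void Awake() {
        ps = GetComponent<ParticleSystem>();
        particles = new ParticleSystem.Particle[ps.main.maxParticles];
    }

    void Update() {
        int count = ps.GetParticles(particles);
        for (int i = 0; i < count; i++) {
            // cast along the path each particle covers this frame
            Vector3 step = particles[i].velocity * Time.deltaTime;
            if (step.sqrMagnitude > 0f &&
                Physics.Raycast(particles[i].position, step.normalized, step.magnitude)) {
                AudioSource.PlayClipAtPoint(collisionSFX, particles[i].position);
            }
        }
    }
}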
public AudioSource audioSourcee;
public float timerToPlay;
float timerToSave;
bool hasPlayed;

void Awake()
{
    // save the configured delay before OnEnable can overwrite it
    timerToSave = timerToPlay;
}

void OnEnable()
{
    // restart the delay each time the object is enabled
    timerToPlay = timerToSave;
    hasPlayed = false;
}

void Update()
{
    if (timerToPlay > 0)
        timerToPlay -= Time.deltaTime;

    if (timerToPlay <= 0 && !hasPlayed)
    {
        // play once after the delay instead of every frame
        audioSourcee.Play();
        hasPlayed = true;
    }
}

How to reverse audio wave using Processing

Is there a way to analyze the audio recorded by the application and reverse its wave? For example, in analog audio the sound wave is like a sine wave ranging over 0, 1, -1. I want to reverse that so that 1 becomes -1 and -1 becomes 1. How to do that using the Processing software?
Nikos is correct that the operation you are looking for is called Invert, not Reverse. It is achieved simply by multiplying every sample by -1.
The best way to do this is to use Minim, Processing's audio library. You can extend the UGen class to make a new effects processor that flips every sample that goes through it. I've included an example below that works with a sine wave. You can change this around to use some other audio source and to draw it however you like.
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;

void setup()
{
  size(300, 200, P2D);

  minim = new Minim(this);
  out = minim.getLineOut();

  Oscil osc;
  Invert inv;

  // initialize the oscillator with a 500 Hz sine wave
  osc = new Oscil(500, 0.2, Waves.SINE);
  inv = new Invert();

  // route the oscillator through the inverter to the output
  osc.patch(inv).patch(out);
}

void draw()
{
  background(0);
}

public class Invert extends UGen
{
  public UGenInput audio;

  Invert()
  {
    audio = new UGenInput(InputType.AUDIO);
  }

  @Override
  protected void uGenerate(float[] channels)
  {
    if (audio.isPatched())
    {
      for (int i = 0; i < channels.length; i++)
      {
        // this is where we multiply each sample by -1
        channels[i] = audio.getLastValues()[i] * -1;
      }
    }
  }
}

Hooking the IDirect3DDevice9::EndScene method to capture a gameplay video: cannot get rid of a text overlay in the recorded video

In fact it is a wild mix of technologies, but the answer to my question (I think) is closest to Direct3D 9. I am hooking into arbitrary D3D9 applications, in most cases games, and injecting my own code to modify the behavior of the EndScene function. The backbuffer is copied into a surface which is set to point to a bitmap in a push source DirectShow filter. The filter samples the bitmaps at 25 fps and streams the video into an .avi file. A text overlay shown across the game's screen tells the user about a hot key combination that is supposed to stop gameplay capture, but this overlay is not supposed to show up in the recorded video. Everything works fast and beautifully except for one annoying fact: on random occasions, a frame with the text overlay makes its way into the recorded video. This is not a desired artefact; the end user only wants to see his gameplay in the video and nothing else. I'd love to hear if anyone can share ideas of why this is happening. Here is the source code for the EndScene hook:
using System;
using SlimDX;
using SlimDX.Direct3D9;
using System.Diagnostics;
using DirectShowLib;
using System.Runtime.InteropServices;

[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
[System.Security.SuppressUnmanagedCodeSecurity]
[Guid("EA2829B9-F644-4341-B3CF-82FF92FD7C20")]
public interface IScene
{
    unsafe int PassMemoryPtr(void* ptr, bool noheaders);
    int SetBITMAPINFO([MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] byte[] ptr, bool noheaders);
}

public class Class1
{
    object _lockRenderTarget = new object();
    public string StatusMess { get; set; }
    Surface _renderTarget;
    //points to image bytes
    unsafe void* bytesptr;
    //used to store headers AND image bytes
    byte[] bytes;
    IFilterGraph2 ifg2;
    ICaptureGraphBuilder2 icgb2;
    IBaseFilter push;
    IBaseFilter compressor;
    IScene scene;
    IBaseFilter mux;
    IFileSinkFilter sink;
    IMediaControl media;
    bool NeedRunGraphInit = true;
    bool NeedRunGraphClean = true;
    DataStream s;
    DataRectangle dr;

    unsafe int EndSceneHook(IntPtr devicePtr)
    {
        int hr;
        using (Device device = Device.FromPointer(devicePtr))
        {
            try
            {
                lock (_lockRenderTarget)
                {
                    bool TimeToGrabFrame = false;
                    //....
                    //logic based on elapsed milliseconds deciding if it is time to grab another frame

                    if (TimeToGrabFrame)
                    {
                        //First ensure we have a Surface to copy render target data into
                        //(created only once)
                        if (_renderTarget == null)
                        {
                            //Create offscreen surface to use as copy of render target data
                            using (SwapChain sc = device.GetSwapChain(0))
                            {
                                //Att: created in system memory, not in video memory
                                _renderTarget = Surface.CreateOffscreenPlain(device, sc.PresentParameters.BackBufferWidth, sc.PresentParameters.BackBufferHeight, sc.PresentParameters.BackBufferFormat, Pool.SystemMemory);
                            } //end using
                        } //end if

                        using (Surface backBuffer = device.GetBackBuffer(0, 0))
                        {
                            //The following line is where the main action takes place:
                            //the Direct3D 9 back buffer gets copied to Surface _renderTarget,
                            //which has been connected by references to DirectShow's
                            //bitmap capture filter.
                            //Inside the filter (code not shown in this listing) the bitmap is
                            //periodically scanned to create a streaming video.
                            device.GetRenderTargetData(backBuffer, _renderTarget);

                            if (NeedRunGraphInit) //run only once
                            {
                                ifg2 = (IFilterGraph2)new FilterGraph();
                                icgb2 = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
                                icgb2.SetFiltergraph(ifg2);
                                push = (IBaseFilter)new PushSourceFilter();
                                scene = (IScene)push;

                                //this way we get the BITMAPFILEHEADER and BITMAPINFO headers;
                                //ToStream is slow, but run it only once to get the headers
                                s = Surface.ToStream(_renderTarget, ImageFileFormat.Bmp);
                                bytes = new byte[s.Length];
                                s.Read(bytes, 0, (int)s.Length);
                                hr = scene.SetBITMAPINFO(bytes, false);

                                //we just supplied the header to the PushSource
                                //filter. Let's pass a reference to
                                //just the image bytes from LockRectangle
                                dr = _renderTarget.LockRectangle(LockFlags.None);
                                s = dr.Data;
                                Result r = _renderTarget.UnlockRectangle();
                                bytesptr = s.DataPointer.ToPointer();
                                hr = scene.PassMemoryPtr(bytesptr, true);

                                //continue building the graph
                                ifg2.AddFilter(push, "MyPushSource");
                                icgb2.SetOutputFileName(MediaSubType.Avi, @"C:\foo.avi", out mux, out sink);
                                icgb2.RenderStream(null, null, push, null, mux);
                                media = (IMediaControl)ifg2;
                                media.Run();
                                NeedRunGraphInit = false;
                                NeedRunGraphClean = true;
                                StatusMess = "now capturing, press shift-F11 to stop";
                            } //end if
                        } //end using backBuffer
                    } //end if TimeToGrabFrame
                } //end lock
            } //end try
            //It is usually thrown when the user makes the game window inactive,
            //or it is thrown deliberately when time is up, or the user pressed F11
            //and it resulted in stopping a capture.
            //If it is thrown for another reason, it is still a good
            //idea to stop recording and free the graph.
            catch (Exception ex)
            {
                //..
                //stop the DirectShow graph and clean up
            } //end catch

            //draw overlay
            using (SlimDX.Direct3D9.Font font = new SlimDX.Direct3D9.Font(device, new System.Drawing.Font("Times New Roman", 26.0f, FontStyle.Bold)))
            {
                font.DrawString(null, StatusMess, 20, 100, System.Drawing.Color.FromArgb(255, 255, 255, 255));
            }

            return device.EndScene().Code;
        } //end using device
    } //end EndSceneHook
}
As sometimes happens, I finally found the answer to this question myself, in case anyone is interested. It turned out that the back buffer in some Direct3D9 apps is not necessarily refreshed every time the hooked EndScene is called. Hence, occasionally a back buffer still carrying the text overlay from the previous EndScene hook call was passed to the DirectShow source filter responsible for collecting input frames. I started stamping each frame with a tiny 3-pixel overlay with known RGB values and checking whether this dummy overlay was still present before passing the frame to the DirectShow filter. If the overlay was there, the previously cached frame was passed instead of the current one. This approach effectively removed the text overlay from the video recorded in the DirectShow graph.
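A minimal sketch of that stamping check, assuming direct access to the grabbed frame bytes (as through the locked system-memory surface in the listing above); the marker offset, values, and helper names are illustrative:
// Hypothetical helpers: stamp three marker pixels into each frame that is
// drawn with the text overlay, then test a grabbed frame for the marker.
static readonly byte[] Marker = { 0x12, 0x34, 0x56 }; // known RGB values

static unsafe void StampMarker(byte* frame)
{
    // write the marker into the first three bytes of the frame
    for (int i = 0; i < Marker.Length; i++)
        frame[i] = Marker[i];
}

static unsafe bool IsStale(byte* frame)
{
    // if the marker is still present, the back buffer was not refreshed
    // since the last hook call, so the cached frame should be passed to
    // the DirectShow filter instead of this one
    for (int i = 0; i < Marker.Length; i++)
        if (frame[i] != Marker[i])
            return false;
    return true;
}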

Simplest way to capture raw audio from audio input for real time processing on a mac

What is the simplest way to capture audio from the built-in audio input and be able to read the raw sample values (as in a .wav) in real time as they come in when requested, like reading from a socket?
Hopefully code that uses one of Apple's frameworks (Audio Queues). The documentation is not very clear, and what I need is very basic.
Try the AudioQueue Framework for this. You mainly have to perform three steps:
1. set up an audio format specifying how to sample the incoming analog audio
2. start a new recording AudioQueue with AudioQueueNewInput()
3. register a callback routine which handles the incoming audio data packets
In step 3 you have a chance to analyze the incoming audio data with AudioQueueGetProperty().
It's roughly like this:
static void HandleAudioCallback (void                               *aqData,
                                 AudioQueueRef                      inAQ,
                                 AudioQueueBufferRef                inBuffer,
                                 const AudioTimeStamp               *inStartTime,
                                 UInt32                             inNumPackets,
                                 const AudioStreamPacketDescription *inPacketDesc)
{
    // Here you examine your audio data
}

static void StartRecording ()
{
    // aqData is assumed to be a struct holding the
    // AudioStreamBasicDescription (mDataFormat) and the queue (mQueue)

    // now let's start the recording
    AudioQueueNewInput (&aqData.mDataFormat,   // the sampling format for recording
                        HandleAudioCallback,   // your callback routine
                        &aqData,               // user data passed to the callback
                        NULL,                  // run loop (NULL = internal thread)
                        kCFRunLoopCommonModes, // run loop mode
                        0,                     // reserved flags, must be 0
                        &aqData.mQueue);       // your freshly created AudioQueue

    // allocate and enqueue a few buffers with AudioQueueAllocateBuffer()
    // and AudioQueueEnqueueBuffer() before starting

    AudioQueueStart (aqData.mQueue, NULL);
}
I suggest the Apple AudioQueue Services Programming Guide for detailed information about how to start and stop the AudioQueue and how to set up all the required objects correctly.
You may also take a closer look at Apple's demo program SpeakHere, but IMHO it is a bit confusing to start with.
It depends how "real-time" you need it.
If you need it very crisp, go right down to the bottom level and use Audio Units. That means setting up an INPUT callback. Remember, when this fires you need to allocate your own buffers and then request the audio from the microphone.
That is, don't get fooled by the presence of a buffer pointer in the parameters... it is only there because Apple uses the same function declaration for the input and render callbacks.
Here is a paste from one of my projects:
OSStatus dataArrivedFromMic(
    void *                       inRefCon,
    AudioUnitRenderActionFlags * ioActionFlags,
    const AudioTimeStamp *       inTimeStamp,
    UInt32                       inBusNumber,
    UInt32                       inNumberFrames,
    AudioBufferList *            dummy_notused )
{
    OSStatus status;

    RemoteIOAudioUnit* unitClass = (RemoteIOAudioUnit *)inRefCon;
    AudioComponentInstance myUnit = unitClass.myAudioUnit;

    AudioBufferList ioData;
    {
        int kNumChannels = 1; // one channel...
        enum {
            kMono = 1,
            kStereo = 2
        };

        ioData.mNumberBuffers = kNumChannels;
        for (int i = 0; i < kNumChannels; i++)
        {
            int bytesNeeded = inNumberFrames * sizeof( Float32 );
            ioData.mBuffers[i].mNumberChannels = kMono;
            ioData.mBuffers[i].mDataByteSize = bytesNeeded;
            ioData.mBuffers[i].mData = malloc( bytesNeeded );
        }
    }

    // actually GET the data that arrived
    status = AudioUnitRender( myUnit,
                              ioActionFlags,
                              inTimeStamp,
                              inBusNumber,
                              inNumberFrames,
                              &ioData );

    // take MONO from mic
    const int channel = 0;
    Float32 * outBuffer = (Float32 *) ioData.mBuffers[channel].mData;

    // get a handle to our game object
    static KPRing* kpRing = nil;
    if ( ! kpRing )
    {
        kpRing = [Game singleton].kpRing;
        assert( kpRing );
    }

    // ... and send it the data we just got from the mic
    [ kpRing floatsArrivedFromMic: outBuffer
                            count: inNumberFrames ];

    // free the buffers we allocated above
    for (UInt32 i = 0; i < ioData.mNumberBuffers; i++)
        free( ioData.mBuffers[i].mData );

    return status;
}
