I am trying to create an app that would allow the user to record some sounds and then play them back.
I would like to have my application play a .wav file that the user will record.
I am having trouble figuring out how to code this, as I keep getting an error.
==== JavaSound Minim Error ====
==== Error invoking createInput on the file loader object: null
Snippet of code:
import ddf.minim.*;
AudioInput in;
AudioRecorder recorder;
RadioButtons r;
boolean showGUI = false;
color bgCol = color(0);
Minim minim;
//Recording players
AudioPlayer player1;
AudioPlayer player2;
void newFile()
{
countname =(name+1);
recorder = minim.createRecorder(in, "data/" + countname + ".wav", true);
}
......
void setup(){
minim = new Minim(this);
in = minim.getLineIn(Minim.MONO, 2048);
newFile();
player1 = minim.loadFile("data/" + countname + ".wav");// recording #1
player2 = minim.loadFile("data/" + countname + ".wav");//recording #2
void draw() {
// Draw the image to the screen at coordinate (0,0)
image(img,0,0);
//recording button
if(r.get() == 0)
{
for(int i = 0; i < in.left.size()-1; i++)
}
if ( recorder.isRecording() )
{
text("Currently recording...", 5, 15);
}
else
{
text("Not recording.", 5, 15);
}
}
//play button
if(r.get() == 1)
{
if(mousePressed){
.......
player_1.cue(0);
player_1.play();
}
if(mousePressed){
.......
player_2.cue(0);
player_2.play();
}
}
The place where I have a problem is here:
player1 = minim.loadFile("data/" + countname + ".wav");// recording #1
player2 = minim.loadFile("data/" + countname + ".wav");//recording #2
The files that will be recorded will be 1.wav, 2.wav. But I cannot simply write it like this:
player1.minim.loadFile ("1.wav");
player2.mminim.loadFile("2.wav");
How would I do this?
As indicated in the JavaDoc page for AudioRecorder [1], calls to beginRecord(), endRecord() and save() need to happen so that whatever you want to record is actually recorded and then also saved to disk. As long as that has not happened there is nothing for loadFile() to load, and you will therefore receive errors. So the problem lies in your program flow: only once your program reaches a state where a file has already been recorded and saved can you actually load it.
There are probably also ways to play back whatever is being recorded right at the moment it arrives in your audio input buffer (one would usually refer to this as 'monitoring'), but as I understand it, that is not what you want.
Aside from this general conceptual flaw, there also seem to be other problems in your code, e.g. countname is not incremented between the two subsequent loadFile calls (I assume it should be); also, at some point you have "player_1.play();" (note the underscore), although you are probably referring to the differently written variable initialized earlier with "player1 = minim.loadFile(...)".
[1] http://code.compartmental.net/minim/javadoc/ddf/minim/AudioRecorder.html
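Here is a minimal sketch of what I mean by the program flow (hedged: the key bindings and the data/1.wav naming scheme are assumptions based on your snippet). The file is recorded and saved first, and loadFile() is only called after save() has actually written it to disk:
import ddf.minim.*;

Minim minim;
AudioInput in;
AudioRecorder recorder;
AudioPlayer player1;   // loaded only after a file actually exists on disk
int countname = 1;     // assumed naming scheme: data/1.wav, data/2.wav, ...

void setup() {
  size(512, 200);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 2048);
  // create the recorder, but do NOT call loadFile() yet - nothing has been recorded
  recorder = minim.createRecorder(in, "data/" + countname + ".wav", true);
}

void keyReleased() {
  if (key == 'r') {
    if (recorder.isRecording()) {
      recorder.endRecord();
      recorder.save();                        // now data/1.wav exists on disk
      player1 = minim.loadFile("data/" + countname + ".wav");  // safe to load now
    } else {
      recorder.beginRecord();
    }
  }
  if (key == 'p' && player1 != null) {        // only play back once something was recorded
    player1.cue(0);
    player1.play();
  }
}

void draw() {
  background(0);
}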
This is the approach to record from an audio file into an AudioRecorder object. You load a file, play it, and then you choose which section to save into another file that you can play back using an AudioPlayer object or your favorite sound player offered by your OS.
Related to:
I am having trouble figuring out how to code this, as I keep getting an error.
Although it says it is an error, it doesn't affect the execution of your program. I would consider this a warning and ignore it. If you want to fix it, I believe you will need to edit the file's tags to properly set their values.
INSTRUCTIONS: In the code, define the file you want to play. When you run the sketch, press r to begin recording and r again to stop recording. Don't forget to press s to save the recording to an audio file, which will be located in the data folder.
NOTE: If you need to play wav files, you will need a Sampler object instead of a FilePlayer one.
//REFERENCE: https://forum.processing.org/one/topic/how-can-i-detect-sound-with-my-mic-in-my-computer.html
//REFERENCE: https://forum.processing.org/two/discussion/21842/is-it-possible-to-perform-fft-with-fileplayer-object-minim
/**
* This sketch demonstrates how to use an <code>AudioRecorder</code> to record audio to disk.
* Press 'r' to toggle recording on and off and the press 's' to save to disk.
* The recorded file will be placed in the sketch folder of the sketch.
* <p>
* For more information about Minim and additional features,
* visit http://code.compartmental.net/minim/
*/
import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.analysis.*;
Minim minim;
FilePlayer player;
AudioOutput out;
AudioRecorder recorder;
void setup()
{
size(512, 200, P3D);
textFont(createFont("Arial", 12));
minim = new Minim(this);
player = new FilePlayer(minim.loadFileStream("energeticDJ.mp3"));
// IT DOESN'T WORK FOR WAV files ====> player = new FilePlayer(minim.loadFileStream("fair1939.wav"));
out = minim.getLineOut();
TickRate rateControl = new TickRate(1.f);
player.patch(rateControl).patch(out);
recorder = minim.createRecorder(out, dataPath("myrecording.wav"),true);
player.loop(0);
}
void draw()
{
background(0);
stroke(255);
// draw a line to show where in the song playback is currently located
float posx = map(player.position(), 0, player.length(), 0, width);
stroke(0, 200, 0);
line(posx, 0, posx, height);
if ( recorder.isRecording() )
{
text("Currently recording...", 5, 15);
} else
{
text("Not recording.", 5, 15);
}
}
void keyReleased()
{
if ( key == 'r' )
{
// to indicate that you want to start or stop capturing audio data, you must call
// beginRecord() and endRecord() on the AudioRecorder object. You can start and stop
// as many times as you like, the audio data will be appended to the end of the buffer
// (in the case of buffered recording) or to the end of the file (in the case of streamed recording).
if ( recorder.isRecording() )
{
recorder.endRecord();
} else
{
recorder.beginRecord();
}
}
if ( key == 's' )
{
// we've filled the file out buffer,
// now write it to the file we specified in createRecorder
// in the case of buffered recording, if the buffer is large,
// this will appear to freeze the sketch for sometime
// in the case of streamed recording,
// it will not freeze as the data is already in the file and all that is being done
// is closing the file.
// the method returns the recorded audio as an AudioRecording,
// see the example AudioRecorder >> RecordAndPlayback for more about that
recorder.save();
println("Done saving.");
}
}
I want to receive a message from the X server when I close my window (when I hit the 'X' button).
For example, I have a list of windows that are currently open (it can be referred to via dayRecord).
I want to print "window closed!" when I click the X button on a terminal.
But even though I clicked it, I couldn't get any message from the X server.
Also, that XNextEvent call is blocking.
I've tested this exact same logic with a window generated in my code using XCreateSimpleWindow,
and it worked perfectly.
I can even close an existing window I created manually with XDestroyWindow.
So I think there should be no difference between a window created by my code
and one that existed before the code started, as long as I have a window ID.
But somehow, I cannot get any message for the pre-existing window.
This is my code.
It gets a list of the windows that are currently open.
Then, when I close a terminal among them, it should print "window closed!"
void windowManager() {
// getting a text viewer to be traced
char **textViewers = getTextViewers();
// retrieving a new record for the day
DayRecord *dayRecord = getNewDayRecord();
// save an x11 display, and currently opened windows into the day record
Display *display = recordInit(dayRecord);
Window whatToClose;
for (int i = 0; i < dayRecord->recordCnt; ++i) {
printf("currently opened: %s %lu\n", dayRecord->record[i].name, dayRecord->record[i].window);
// for example: i want to know when a terminal is closed. (before the program starts, it should be opened, or it crashes.)
if (strcmp(dayRecord->record[i].name, "gnome-terminal-server") == 0)
whatToClose = dayRecord->record[i].window;
addInfoSpace(display, dayRecord->record + i, textViewers);
}
printf("let's close %lu: %s\n", whatToClose, getWindowName(display, whatToClose));
// window close detecting logic down here...
Atom wmDelete = XInternAtom(display, "WM_DELETE_WINDOW", False);
XSetWMProtocols(display, whatToClose, &wmDelete, 1);
XEvent xEvent;
printf("I am listening!\n");
while (True) {
XNextEvent(display, &xEvent);
if (xEvent.type == ClientMessage && xEvent.xclient.data.l[0] == wmDelete) {
printf("window closed!\n");
break;
}
}
char *json = (char *)malloc(sizeof(char) * 100000);
dayRecordToJSON(dayRecord, json);
printf("%s", json);
// freeing memories
free2dArray((void **)textViewers, MAX_FILE_COUNT);
freeDayRecord(dayRecord);
XCloseDisplay(display);
}
Thank you.
I've tried many things and this worked.
If you're trying to do the same thing, please check the code below.
I am not sure this is the right way to do it.
There may be a safer, more correct approach, but this works anyway.
// ...
Atom wmDelete = XInternAtom(display, "WM_DELETE_WINDOW", False);
XSetWMProtocols(display, whatToClose, &wmDelete, 1);
XSelectInput(display, whatToClose, SubstructureNotifyMask);
XEvent xEvent;
printf("I am listening!\n");
while (1) {
XNextEvent(display, &xEvent);
if (xEvent.type == DestroyNotify) {
printf("window closed!\n");
break;
}
}
// ...
I have a webcam feed in my Processing sketch and I can record and save the video. What I want to accomplish is that when I go to the next case (drawScreenOne), the video I just recorded will show up on the canvas. The problem I have now is that when I save the video, with the video export library from com.hamoid, it gets saved in the same folder as my sketch, but to play a movie it needs to be in the data folder. So I can't play the movies without manually moving them to the data folder. Can you do that from within Processing?
And how can I load up the videos that I just created in a case before? Do I need to use an array for that? I can play the movies when I manually move them to the data folder, but I want Processing to handle that.
This is the code I have so far:
import com.hamoid.*;
import processing.video.*;
import ddf.minim.*;
Minim minim;
AudioInput in;
AudioRecorder recorder;
Movie myMovie;
Movie myMovie1;
int currentScreen;
VideoExport videoExport;
boolean recording = false;
Capture theCap;
Capture cam;
int i = 0;
int countname; //change the name
int name = 000000; //set the number in key's' function
// change the file name
void newFile()
{
countname =( name + 1);
recorder = minim.createRecorder(in, "file/Sound" + countname + ".wav", true);
// println("file/" + countname + ".wav");
}
void setup() {
size(500,500);
frameRate(30);
noStroke();
smooth();
myMovie = new Movie(this, "video0.mp4");
myMovie.loop();
myMovie1 = new Movie(this, "video1.mp4");
myMovie1.loop();
String[] cameras = Capture.list();
if (cameras.length == 0) {
println("There are no cameras available for capture.");
exit();
} else {
println("Available cameras:");
for (int i = 0; i < cameras.length; i++) {
println(cameras[i]);
}
// The camera can be initialized directly using an
// element from the array returned by list():
//cam = new Capture(this, cameras[3]); //built in mac cam "isight"
cam = new Capture(this, 1280, 960, "USB-camera"); //external camera Lex, left USB
cam.start();
}
println("Druk op R om geluid en video op te nemen.Druk nog een keer op R om het opnemen te stoppen en druk op S om het op te slaan Druk vervolgens op Z om verder te gaan.");
videoExport = new VideoExport(this, "video" + i + ".mp4");
minim = new Minim(this);
// get a stereo line-in: sample buffer length of 2048
// default sample rate is 44100, default bit depth is 16
in = minim.getLineIn(Minim.STEREO, 2048);
// create a recorder that will record from the input to the filename specified, using buffered recording
// buffered recording means that all captured audio will be written into a sample buffer
// then when save() is called, the contents of the buffer will actually be written to a file
// the file will be located in the sketch's root folder.
newFile();//go to change file name
textFont(createFont("SanSerif", 12));
}
void draw() {
switch(currentScreen){
case 0: drawScreenZero(); break; //camera
case 1: drawScreenOne(); break; //1 video
case 2: drawScreenZero(); break; //camera
case 3: drawScreenTwo(); break; // 2 video's
case 4: drawScreenZero(); break; //camera
case 5: drawScreenThree(); break; //3 video's
case 6: drawScreenZero(); break; //camera
case 7: drawScreenFour(); break; //4 video's
default: background(0); break;
}
}
void mousePressed() {
currentScreen++;
if (currentScreen > 2) { currentScreen = 0; }
}
void drawScreenZero() {
println("drawScreenZero camera");
if (cam.available() == true) {
cam.read();
}
image(cam, 0,0,width, height);
// The following does the same, and is faster when just drawing the image
// without any additional resizing, transformations, or tint.
//set(0, 0, cam);
if (recording) {
videoExport.saveFrame();
}
for(int i = 0; i < in.bufferSize() - 1; i++)
{
line(i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50);
line(i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50);
}
if ( recorder.isRecording() )
{
text("Aan het opnemen...", 5, 15);
text("Druk op R als je klaar bent met opnemen en druk op S om het op te slaan.", 5, 30);
}
else
{
text("Gestopt met opnemen. Druk op R om op te nemen, druk op S om op te slaan.", 5, 15);
}
}
void drawScreenOne() {
background(0,255,0);
//fill(0);
//rect(250,40,250,400);
println("drawScreenOne 1 video");
image(myMovie, 0,0, (width/2),(height/2));
}
void drawScreenTwo(){
background(0,0,255);
println("drawScreenTwo 2 videos");
//triangle(150,100,150,400,450,250);
image(myMovie, 0,0, (width/2),(height/2));
image(myMovie1, (width/2),(height/2),(width/2),(height/2));
}
void drawScreenThree(){
//fill(0);
//rect(250,40,250,400);
background(255,0,0);
println("drawScreenThree 3 videos");
image(myMovie, 0,0, (width/2),(height/2));
image(myMovie1, (width/2),(height/2),(width/2),(height/2));
image(myMovie, (width/2),0, (width/2),(height/2));
}
void drawScreenFour(){
//triangle(150,100,150,400,450,250);
background(0,0,255);
println("drawScreenFour 4 videos");
image(myMovie, 0,0, (width/2),(height/2));
image(myMovie1, (width/2),(height/2),(width/2),(height/2));
image(myMovie, (width/2),0, (width/2),(height/2));
image(myMovie1, 0,(height/2),(width/2),(height/2));
}
void keyPressed() {
if (key == 'r' || key == 'R') {
recording = !recording;
println("Recording is " + (recording ? "ON" : "OFF"));
} else if (key == 's' || key == 'S') {
i++;
videoExport = new VideoExport(this, "video" + i + ".mp4");
currentScreen++;
if (currentScreen > 7) { currentScreen = 0; }
}
}
void movieEvent(Movie m) {
m.read();
}
void keyReleased()
{
if ( key == 'r' )
{
// to indicate that you want to start or stop capturing audio data, you must call
// beginRecord() and endRecord() on the AudioRecorder object. You can start and stop
// as many times as you like, the audio data will be appended to the end of the buffer
// (in the case of buffered recording) or to the end of the file (in the case of streamed recording).
if ( recorder.isRecording() )
{
recorder.endRecord();
}
else
{
/*#######################################*/
newFile();
/*#######################################*/
recorder.beginRecord();
}
}
if ( key == 's' )
{
// we've filled the file out buffer,
// now write it to the file we specified in createRecorder
// in the case of buffered recording, if the buffer is large,
// this will appear to freeze the sketch for sometime
// in the case of streamed recording,
// it will not freeze as the data is already in the file and all that is being done
// is closing the file.
// the method returns the recorded audio as an AudioRecording,
// see the example AudioRecorder >> RecordAndPlayback for more about that
name++; //change the file name, every time +1
recorder.save();
println("Done saving.");
println(name);//check the name
}
}
void stop()
{
// always close Minim audio classes when you are done with them
in.close();
minim.stop();
super.stop();
}
Can you do that from within Processing?
Sure. Just google something like "Java move file" and I'm sure you'll find a ton of results. Or you could just save the video to the data directory in the first place. I've never used the VideoExport class so this is just a guess, but I'd imagine that this would put the video in the data directory:
videoExport = new VideoExport(this, "data/video" + i + ".mp4");
And how can I load up the videos that I just created in a case before? Do I need to use an array for that?
I'm not sure I understand this question, but you can use any variable you want. Just keep track of where the files are going, and then load them from there.
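As a hedged sketch of the array idea (the clip count and names are assumptions based on your "video" + i + ".mp4" scheme), you could keep the recorded clips in an array and load each one only after its export is finished:
Movie[] clips = new Movie[4];   // one slot per recording you expect

// call this once videoExport has finished writing data/video<index>.mp4
void loadClip(int index) {
  clips[index] = new Movie(this, "data/video" + index + ".mp4");
  clips[index].loop();
}

// in a draw screen, show whichever clips have been loaded so far
void drawRecordedClips() {
  for (int j = 0; j < clips.length; j++) {
    if (clips[j] != null) {
      image(clips[j], (j % 2) * width/2, (j / 2) * height/2, width/2, height/2);
    }
  }
}
Your existing movieEvent() will keep feeding frames to whatever Movie objects are in the array.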
I want to make a soundboard in the Processing language that plays sounds so that the computer handles them as if they were input from my microphone. This is the only problem I have with making the soundboard: how do I make the sounds play as if they were recorded by the microphone?
I have spent an hour searching and trying to get help, but I have nothing to work with.
Minim provides the class AudioInput for monitoring the user’s current record source (this is often set in the sound card control panel), such as the microphone or the line-in
from
http://code.compartmental.net/tools/minim/quickstart/
EDIT:
Have you seen this?
import ddf.minim.*;
import ddf.minim.ugens.*;
Minim minim;
// for recording
AudioInput in;
AudioRecorder recorder;
// for playing back
AudioOutput out;
FilePlayer player;
void setup()
{
size(512, 200, P3D);
minim = new Minim(this);
// get a stereo line-in: sample buffer length of 2048
// default sample rate is 44100, default bit depth is 16
in = minim.getLineIn(Minim.STEREO, 2048);
// create an AudioRecorder that will record from in to the filename specified.
// the file will be located in the sketch's main folder.
recorder = minim.createRecorder(in, "myrecording.wav");
// get an output we can playback the recording on
out = minim.getLineOut( Minim.STEREO );
textFont(createFont("Arial", 12));
}
void draw()
{
background(0);
stroke(255);
// draw the waveforms
// the values returned by left.get() and right.get() will be between -1 and 1,
// so we need to scale them up to see the waveform
for(int i = 0; i < in.left.size()-1; i++)
{
line(i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50);
line(i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50);
}
if ( recorder.isRecording() )
{
text("Now recording...", 5, 15);
}
else
{
text("Not recording.", 5, 15);
}
}
void keyReleased()
{
if ( key == 'r' )
{
// to indicate that you want to start or stop capturing audio data,
// you must call beginRecord() and endRecord() on the AudioRecorder object.
// You can start and stop as many times as you like; the audio data will
// be appended to the end of the file.
if ( recorder.isRecording() )
{
recorder.endRecord();
}
else
{
recorder.beginRecord();
}
}
if ( key == 's' )
{
// we've filled the file out buffer,
// now write it to a file of the type we specified in setup
// in the case of buffered recording,
// this will appear to freeze the sketch for sometime, if the buffer is large
// in the case of streamed recording,
// it will not freeze as the data is already in the file and all that is being done
// is closing the file.
// save returns the recorded audio in an AudioRecordingStream,
// which we can then play with a FilePlayer
if ( player != null )
{
player.unpatch( out );
player.close();
}
player = new FilePlayer( recorder.save() );
player.patch( out );
player.play();
}
}
It's from here:
http://code.compartmental.net/minim/audiorecorder_class_audiorecorder.html
In fact it is a wild mix of technologies, but the answer to my question (I think) is closest to Direct3D 9. I am hooking into an arbitrary D3D9 application, in most cases a game, and injecting my own code to modify the behavior of the EndScene function. The backbuffer is copied into a surface which is set to point to a bitmap in a push-source DirectShow filter. The filter samples the bitmaps at 25 fps and streams the video into an .avi file.
There is a text overlay shown across the game's screen telling the user about a hotkey combination that is supposed to stop gameplay capture, but this overlay is not supposed to show up in the recorded video. Everything works fast and beautifully except for one annoying fact: on random occasions, a frame with the text overlay makes its way into the recorded video. This is not a desired artifact; the end user only wants to see his gameplay in the video and nothing else. I'd love to hear if anyone can share ideas of why this is happening. Here is the source code for the EndScene hook:
using System;
using SlimDX;
using SlimDX.Direct3D9;
using System.Diagnostics;
using DirectShowLib;
using System.Runtime.InteropServices;
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
[System.Security.SuppressUnmanagedCodeSecurity]
[Guid("EA2829B9-F644-4341-B3CF-82FF92FD7C20")]
public interface IScene
{
unsafe int PassMemoryPtr(void* ptr, bool noheaders);
int SetBITMAPINFO([MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]byte[] ptr, bool noheaders);
}
public class Class1
{
object _lockRenderTarget = new object();
public string StatusMess { get; set; }
Surface _renderTarget;
//points to image bytes
unsafe void* bytesptr;
//used to store headers AND image bytes
byte[] bytes;
IFilterGraph2 ifg2;
ICaptureGraphBuilder2 icgb2;
IBaseFilter push;
IBaseFilter compressor;
IScene scene;
IBaseFilter mux;
IFileSinkFilter sink;
IMediaControl media;
bool NeedRunGraphInit = true;
bool NeedRunGraphClean = true;
DataStream s;
DataRectangle dr;
unsafe int EndSceneHook(IntPtr devicePtr)
{
int hr;
using (Device device = Device.FromPointer(devicePtr))
{
try
{
lock (_lockRenderTarget)
{
bool TimeToGrabFrame = false;
//....
//logic based on elapsed milliseconds deciding if it is time to grab another frame
if (TimeToGrabFrame)
{
//First ensure we have a Surface to render target data into
//called only once
if (_renderTarget == null)
{
//Create offscreen surface to use as copy of render target data
using (SwapChain sc = device.GetSwapChain(0))
{
//Att: created in system memory, not in video memory
_renderTarget = Surface.CreateOffscreenPlain(device, sc.PresentParameters.BackBufferWidth, sc.PresentParameters.BackBufferHeight, sc.PresentParameters.BackBufferFormat, Pool.SystemMemory);
} //end using
} // end if
using (Surface backBuffer = device.GetBackBuffer(0, 0))
{
//The following line is where main action takes place:
//Direct3D 9 back buffer gets copied to Surface _renderTarget,
//which has been connected by references to DirectShow's
//bitmap capture filter
//Inside the filter ( code not shown in this listing) the bitmap is periodically
//scanned to create a streaming video.
device.GetRenderTargetData(backBuffer, _renderTarget);
if (NeedRunGraphInit) //ran only once
{
ifg2 = (IFilterGraph2)new FilterGraph();
icgb2 = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
icgb2.SetFiltergraph(ifg2);
push = (IBaseFilter) new PushSourceFilter();
scene = (IScene)push;
//this way we get bitmapfile and bitmapinfo headers
//ToStream is slow, but run it only once to get the headers
s = Surface.ToStream(_renderTarget, ImageFileFormat.Bmp);
bytes = new byte[s.Length];
s.Read(bytes, 0, (int)s.Length);
hr = scene.SetBITMAPINFO(bytes, false);
//we just supplied the header to the PushSource
//filter. Let's pass reference to
//just image bytes from LockRectangle
dr = _renderTarget.LockRectangle(LockFlags.None);
s = dr.Data;
Result r = _renderTarget.UnlockRectangle();
bytesptr = s.DataPointer.ToPointer();
hr = scene.PassMemoryPtr(bytesptr, true);
//continue building graph
ifg2.AddFilter(push, "MyPushSource");
icgb2.SetOutputFileName(MediaSubType.Avi, @"C:\foo.avi", out mux, out sink);
icgb2.RenderStream(null, null, push, null, mux);
media = (IMediaControl)ifg2;
media.Run();
NeedRunGraphInit = false;
NeedRunGraphClean = true;
StatusMess = "now capturing, press shift-F11 to stop";
} //end if
} // end using backbuffer
} // end if Time to grab frame
} //end lock
} // end try
//It is usually thrown when the user makes the game window inactive
//or it is thrown deliberately when time is up, or the user pressed F11 and
//it resulted in stopping a capture.
//If it is thrown for another reason, it is still a good
//idea to stop recording and free the graph
catch (Exception ex)
{
//..
//stop the DirectShow graph and cleanup
} // end catch
//draw overlay
using (SlimDX.Direct3D9.Font font = new SlimDX.Direct3D9.Font(device, new System.Drawing.Font("Times New Roman", 26.0f, FontStyle.Bold)))
{
font.DrawString(null, StatusMess, 20, 100, System.Drawing.Color.FromArgb(255, 255, 255, 255));
}
return device.EndScene().Code;
} // end using device
} //end EndSceneHook
As sometimes happens, I finally found an answer to this question myself, in case anyone is interested. It turned out that the backbuffer in some Direct3D9 apps is not necessarily refreshed each time the hooked EndScene is called. Hence, occasionally the backbuffer with the text overlay from the previous EndScene hook call was passed to the DirectShow source filter responsible for collecting input frames. I started stamping each frame with a tiny 3-pixel overlay with known RGB values and checking whether this dummy overlay was still present before passing the frame to the DirectShow filter. If the overlay was there, the previously cached frame was passed instead of the current one. This approach effectively removed the text overlay from the video recorded in the DirectShow graph.
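To make that concrete, here is a rough sketch of the marker check in Java as a language-agnostic illustration (the original code is C#/SlimDX; the marker values, its position in the first three pixels, and the idea that the array stands in for a direct view of the backbuffer data are all assumptions):
// Illustration of the frame-stamp trick: a tiny marker with known values is written
// into the backbuffer data after a frame is accepted. If the marker is still there on
// the next capture, the backbuffer was not refreshed (it still holds last frame's
// image, text overlay included), so the cached clean frame is reused instead.
class FrameStampFilter {
    // assumed marker: three ARGB pixels with values a game is unlikely to produce
    private static final int[] MARKER = {0xFF010203, 0xFF040506, 0xFF070809};
    private int[] cachedFrame;   // last clean frame handed to the capture filter

    // 'backbufferPixels' stands in for the locked render-target data (ARGB ints)
    int[] selectFrame(int[] backbufferPixels) {
        if (cachedFrame != null && markerPresent(backbufferPixels)) {
            return cachedFrame;                    // stale buffer: reuse previous clean frame
        }
        cachedFrame = backbufferPixels.clone();    // fresh frame: cache a clean copy
        stampMarker(backbufferPixels);             // mark the live buffer so staleness is detectable next call
        return cachedFrame;
    }

    private boolean markerPresent(int[] pixels) {
        for (int i = 0; i < MARKER.length; i++) {
            if (pixels[i] != MARKER[i]) return false;
        }
        return true;
    }

    private void stampMarker(int[] pixels) {
        for (int i = 0; i < MARKER.length; i++) {
            pixels[i] = MARKER[i];
        }
    }
}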
What is the simplest way to capture audio from the built-in audio input and be able to read the raw sampled values (as in a .wav) in real time as they come in when requested, like reading from a socket?
Hopefully code that uses one of Apple's frameworks (Audio Queues). The documentation is not very clear, and what I need is very basic.
Try the AudioQueue framework for this. You mainly have to perform three steps:
set up an audio format that describes how to sample the incoming analog audio
start a new recording AudioQueue with AudioQueueNewInput()
register a callback routine that handles the incoming audio data packets
In step 3 you have a chance to analyze the incoming audio data with AudioQueueGetProperty()
It's roughly like this:
static void HandleAudioCallback (void *aqData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription *inPacketDesc) {
// Here you examine your audio data
}
static void StartRecording() {
// now let's start the recording
AudioQueueNewInput (&aqData.mDataFormat, // The sampling format how to record
HandleAudioCallback, // Your callback routine
&aqData, // e.g. AudioStreamBasicDescription
NULL,
kCFRunLoopCommonModes,
0,
&aqData.mQueue); // Your fresh created AudioQueue
AudioQueueStart(aqData.mQueue,
NULL);
}
I suggest the Apple AudioQueue Services Programming Guide for detailed information about how to start and stop the AudioQueue and how to correctly set up all the required objects.
You may also have a closer look at Apple's demo program SpeakHere, but IMHO it is a bit confusing to start with.
It depends how 'real-time' you need it to be.
If you need it very crisp, go right down to the bottom level and use Audio Units. That means setting up an INPUT callback. Remember, when this fires you need to allocate your own buffers and then request the audio from the microphone.
That is, don't get fooled by the presence of a buffer pointer in the parameters... it is only there because Apple is using the same function declaration for the input and render callbacks.
Here is a paste from one of my projects:
OSStatus dataArrivedFromMic(
void * inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * dummy_notused )
{
OSStatus status;
RemoteIOAudioUnit* unitClass = (RemoteIOAudioUnit *)inRefCon;
AudioComponentInstance myUnit = unitClass.myAudioUnit;
AudioBufferList ioData;
{
int kNumChannels = 1; // one channel...
enum {
kMono = 1,
kStereo = 2
};
ioData.mNumberBuffers = kNumChannels;
for (int i = 0; i < kNumChannels; i++)
{
int bytesNeeded = inNumberFrames * sizeof( Float32 );
ioData.mBuffers[i].mNumberChannels = kMono;
ioData.mBuffers[i].mDataByteSize = bytesNeeded;
ioData.mBuffers[i].mData = malloc( bytesNeeded );
}
}
// actually GET the data that arrived
status = AudioUnitRender( (void *)myUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
& ioData );
// take MONO from mic
const int channel = 0;
Float32 * outBuffer = (Float32 *) ioData.mBuffers[channel].mData;
// get a handle to our game object
static KPRing* kpRing = nil;
if ( ! kpRing )
{
//AppDelegate * appDelegate = [UIApplication sharedApplication].delegate;
kpRing = [Game singleton].kpRing;
assert( kpRing );
}
// ... and send it the data we just got from the mic
[ kpRing floatsArrivedFromMic: outBuffer
count: inNumberFrames ];
return status;
}