I am learning JUCE and I am writing a program that just reads the input from the audio card and plays it back. Obviously this is just for learning purposes. I am using the audio application template. This is the code inside the getNextAudioBlock() function:
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill) override
{
    if (true) // this is going to be replaced by checking the value of a button
    {
        const int channel = 0;

        if (true) // this is going to be replaced too
        {
            const float* inBuffer = bufferToFill.buffer->getReadPointer (channel, bufferToFill.startSample);
            float* outBuffer = bufferToFill.buffer->getWritePointer (channel, bufferToFill.startSample);

            for (int sample = 0; sample < bufferToFill.numSamples; ++sample)
                outBuffer[sample] = inBuffer[sample];
        }
        else
        {
            bufferToFill.buffer->clear (0, bufferToFill.startSample, bufferToFill.numSamples);
        }
    }
    else
    {
        bufferToFill.buffer->clear (0, bufferToFill.startSample, bufferToFill.numSamples);
    }
}
The code is really simple: the content from the input buffer is copied directly to the output buffer. However, I am not hearing anything. What am I doing wrong?
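One thing worth checking with the stock audio application template (assuming nothing else was changed from the generated project): how many input channels the constructor requests. If the template was generated without inputs, there is nothing for getNextAudioBlock() to read. A minimal sketch of that check:
// In the MainComponent constructor: requesting 0 input channels,
// e.g. setAudioChannels (0, 2), leaves the input buffer silent.
// Ask for inputs as well as outputs:
setAudioChannels (2, 2); // 2 input channels, 2 output channels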
I'm trying to create something with P5.js that resembles an audio looper—that is, you record a snippet of audio, it plays back to you as a loop, and then you can record other snippets of audio to play together to create a song.
I figured out how to record audio, play it back as a loop, and stop the loop, but I'm not sure of the best way to repeat that function so that it can be used for buttons that work independently of each other and record separate audio files, since I would like to record multiple loops.
I'm still pretty new to P5.js, so there's probably a simple way to do this; any ideas would help! In general, if you have any ideas on how to approach the project as a whole (the audio looper), I'd love to hear them.
This is my code:
let mic, recorder, soundFile, button;
let state = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(200);

  mic = new p5.AudioIn();
  mic.start();
  recorder = new p5.SoundRecorder();
  recorder.setInput(mic);
  soundFile = new p5.SoundFile();

  button = createButton("record");
  button.size(200, 100);
  button.style("font-family", "Bodoni");
  button.style("font-size", "48px");
  button.position(10, 10); // position takes x and y
  button.mouseClicked(loopRecord);
}
// this is the looper
function loopRecord() {
  if (state === 0 && mic.enabled) {
    recorder.record(soundFile);
    background(255, 0, 0);
    state++;
  } else if (state === 1) {
    recorder.stop();
    background(0, 255, 0);
    state++;
  } else if (state === 2) {
    soundFile.loop();
    state++;
  } else if (state === 3) {
    soundFile.stop();
    state++;
  } else if (state === 4) {
    state = 0; // assignment, not comparison, so the cycle restarts
  }
}
You should structure your data so that each button has its own state (and its other parameters, which should not be global, but instead unique to each loop).
Here's an example using an array of track objects:
https://editor.p5js.org/RemyDekor/sketches/gM75vBYmk
I'll copy the code here as well
let mic;
// this array will be filled afterwards
let tracksArray = [];

function setup() {
  createCanvas(400, 400);
  background(200);

  // we need only one mic (it is still global)
  mic = new p5.AudioIn();
  mic.start();

  // we call a function that creates a custom "track" object and puts it in
  // tracksArray (which you could use for some other things, but is currently unused)
  createTrack();

  // we create a button that calls the same function when we click on it
  const addTrackButton = createButton("add track");
  addTrackButton.mouseClicked(createTrack);
  addTrackButton.position(10, 10);
}

// We now use names (strings) instead of abstract numbers to set the state of the track
function handleTrackButtonClick(trackObject) {
  if (trackObject.state === "idle" && mic.enabled) {
    trackObject.recorder.record(trackObject.soundFile);
    background(255, 0, 0);
    trackObject.state = "recording";
    trackObject.button.elt.innerText = "[recording..]";
  } else if (trackObject.state === "recording") {
    trackObject.recorder.stop();
    background(0, 255, 0);
    trackObject.state = "stopped";
    trackObject.button.elt.innerText = "play";
  } else if (trackObject.state === "stopped") {
    trackObject.soundFile.loop();
    trackObject.state = "playing";
    trackObject.button.elt.innerText = "[playing..] stop";
  } else if (trackObject.state === "playing") {
    trackObject.soundFile.stop();
    trackObject.state = "stopped";
    trackObject.button.elt.innerText = "[stopped..] play";
  }
}

function createTrack() {
  const newButton = createButton("record");
  newButton.style("font-family", "Bodoni");
  newButton.style("font-size", "48px");
  // we do not position this button anymore, and use the "flow" of the html document
  newButton.style("display", "block");

  // this is the "track" object we create, and we attach the previously
  // global parameters/states to it
  const newTrackObject = {
    button: newButton,
    state: "idle",
    recorder: new p5.SoundRecorder(),
    soundFile: new p5.SoundFile()
  };

  newButton.mouseClicked(function() {
    handleTrackButtonClick(newTrackObject);
  });

  newTrackObject.recorder.setInput(mic);
  tracksArray.push(newTrackObject);
}
I am using the Portable Device API to automatically get photos from a connected smartphone. I have it all transferring correctly. The code I use is the standard DownloadFile() routine:
public PortableDownloadInfo DownloadFile(PortableDeviceFile file, string saveToPath)
{
    IPortableDeviceContent content;
    _device.Content(out content);

    IPortableDeviceResources resources;
    content.Transfer(out resources);

    PortableDeviceApiLib.IStream wpdStream;
    uint optimalTransferSize = 0;

    var property = new _tagpropertykey
    {
        fmtid = new Guid(0xE81E79BE, 0x34F0, 0x41BF, 0xB5, 0x3F, 0xF1, 0xA0, 0x6A, 0xE8, 0x78, 0x42),
        pid = 0
    };

    resources.GetStream(file.Id, ref property, 0, ref optimalTransferSize, out wpdStream);

    System.Runtime.InteropServices.ComTypes.IStream sourceStream =
        // ReSharper disable once SuspiciousTypeConversion.Global
        (System.Runtime.InteropServices.ComTypes.IStream)wpdStream;

    var filename = Path.GetFileName(file.Name);
    if (string.IsNullOrEmpty(filename))
        return null;

    FileStream targetStream = new FileStream(Path.Combine(saveToPath, filename),
        FileMode.Create, FileAccess.Write);

    try
    {
        unsafe
        {
            var buffer = new byte[1024];
            int bytesRead;
            do
            {
                sourceStream.Read(buffer, 1024, new IntPtr(&bytesRead));
                targetStream.Write(buffer, 0, 1024);
            } while (bytesRead > 0);
            targetStream.Close();
        }
    }
    finally
    {
        Marshal.ReleaseComObject(sourceStream);
        Marshal.ReleaseComObject(wpdStream);
    }

    return pdi; // constructed elsewhere in the original class
}
There are two problems with this standard code:
1) When the images are saved to the Windows machine, there is no EXIF information. That information is what I need. How do I preserve it?
2) The saved files are very bloated. For example, the source JPEG is 1,045,807 bytes, while the downloaded file is 3,942,840 bytes! It is similar for all of the other files. I would have thought that the code inside the unsafe{} section would output it byte for byte? Is there a better way to transfer the data? (A safe way?)
Sorry about this. It works fine; it is something else that is causing these issues.
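For anyone who hits the same bloat symptom with this snippet: the loop above writes the full 1024-byte buffer on every pass, regardless of how many bytes Read() actually reported, which pads the output file. A sketch of a safe copy loop (no unsafe block; same sourceStream and targetStream as above) that marshals the byte count and writes only what was read:
IntPtr bytesReadPtr = Marshal.AllocHGlobal(sizeof(int));
try
{
    var buffer = new byte[1024];
    int bytesRead;
    do
    {
        // IStream.Read reports the actual byte count through a pointer
        sourceStream.Read(buffer, buffer.Length, bytesReadPtr);
        bytesRead = Marshal.ReadInt32(bytesReadPtr);
        // write only the bytes actually read, so the copy stays byte-for-byte
        targetStream.Write(buffer, 0, bytesRead);
    } while (bytesRead > 0);
}
finally
{
    Marshal.FreeHGlobal(bytesReadPtr);
}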
I want to make a soundboard in the Processing language that plays sounds such that the computer handles them as if they were input from my microphone. This is the only problem I have with making the soundboard. How do I make the sounds play as if they were recorded by the microphone?
I have spent an hour searching and trying to get help, but I have nothing to work with.
Minim provides the class AudioInput for monitoring the user’s current record source (this is often set in the sound card control panel), such as the microphone or the line-in
from
http://code.compartmental.net/tools/minim/quickstart/
EDIT:
Have you seen this?
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;

// for recording
AudioInput in;
AudioRecorder recorder;

// for playing back
AudioOutput out;
FilePlayer player;

void setup()
{
  size(512, 200, P3D);

  minim = new Minim(this);

  // get a stereo line-in: sample buffer length of 2048
  // default sample rate is 44100, default bit depth is 16
  in = minim.getLineIn(Minim.STEREO, 2048);

  // create an AudioRecorder that will record from in to the filename specified.
  // the file will be located in the sketch's main folder.
  recorder = minim.createRecorder(in, "myrecording.wav");

  // get an output we can play the recording back on
  out = minim.getLineOut( Minim.STEREO );

  textFont(createFont("Arial", 12));
}

void draw()
{
  background(0);
  stroke(255);

  // draw the waveforms
  // the values returned by left.get() and right.get() will be between -1 and 1,
  // so we need to scale them up to see the waveform
  for(int i = 0; i < in.left.size()-1; i++)
  {
    line(i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50);
    line(i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50);
  }

  if ( recorder.isRecording() )
  {
    text("Now recording...", 5, 15);
  }
  else
  {
    text("Not recording.", 5, 15);
  }
}

void keyReleased()
{
  if ( key == 'r' )
  {
    // to indicate that you want to start or stop capturing audio data,
    // you must call beginRecord() and endRecord() on the AudioRecorder object.
    // You can start and stop as many times as you like; the audio data will
    // be appended to the end of the file.
    if ( recorder.isRecording() )
    {
      recorder.endRecord();
    }
    else
    {
      recorder.beginRecord();
    }
  }
  if ( key == 's' )
  {
    // we've filled the file out buffer,
    // now write it to a file of the type we specified in setup.
    // in the case of buffered recording, this will appear to freeze
    // the sketch for some time, if the buffer is large.
    // in the case of streamed recording, it will not freeze, as the data
    // is already in the file and all that is being done is closing the file.
    // save() returns the recorded audio in an AudioRecordingStream,
    // which we can then play with a FilePlayer
    if ( player != null )
    {
      player.unpatch( out );
      player.close();
    }
    player = new FilePlayer( recorder.save() );
    player.patch( out );
    player.play();
  }
}
It's from here:
http://code.compartmental.net/minim/audiorecorder_class_audiorecorder.html
I'm currently experimenting with sending a string to my Arduino Yun and trying to get it to reply back depending on what I send it.
I picked up a framework of some code here and have been experimenting with it, but apart from the serial monitor displaying "Ready!" I can't make it go any further.
The code is:
// declare a String to hold what we're inputting
String incomingString;

void setup() {
  // initialise Serial communication at 9600 baud
  Serial.begin(9600);
  while(!Serial);
  //delay(4000);
  Serial.println("Ready!");
  // The incoming String is built up one byte at a time.
  incomingString = "";
}

void loop () {
  // Check if there's incoming serial data.
  if (Serial.available() > 0) {
    // Read a byte from the serial buffer.
    char incomingByte = (char)Serial.read();
    incomingString += incomingByte;
    // Check for null termination of the string.
    if (incomingByte == '\0') {
      // ...do something with the String...
      if (incomingString == "hello") {
        Serial.println("Hello World!");
      }
      incomingString = "";
    }
  }
}
Can anyone point me in the right direction?
Thanks
I suspect the problem is that you're adding the null terminator onto the end of your string when you do: incomingString += incomingByte. When you're working with string objects (as opposed to raw char * strings) you don't need to do that. The object will take care of termination on its own.
The result is that your if condition is effectively doing this: if ("hello\0" == "hello") .... Obviously they're not equal, so the condition always fails.
I believe the solution is just to make sure you don't append the byte if it's null.
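A minimal sketch of that fix (assuming the rest of the sketch stays as posted): only append the byte when it is not the terminator.
void loop() {
  if (Serial.available() > 0) {
    char incomingByte = (char)Serial.read();
    if (incomingByte == '\0') {
      // terminator reached: the String now holds exactly "hello", no trailing '\0'
      if (incomingString == "hello") {
        Serial.println("Hello World!");
      }
      incomingString = "";
    } else {
      incomingString += incomingByte;
    }
  }
}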
Try this:
String IncomingData = "";
char var;

void setup()
{
  Serial.begin(9600);
  // you don't have to wait for the port, but if you want to:
  while (!Serial)
  {
    delay(5);
  }
  Serial.println("Ready");
}

void loop()
{
  while (Serial.available())
  {
    var = Serial.read();
    IncomingData += String(var);
    // or: IncomingData.concat(String(var));
  }
  // print only once something has arrived
  if (IncomingData.length() > 0)
  {
    Serial.println(IncomingData);
    IncomingData = "";
  }
}
What is the simplest way to capture audio from the built-in audio input and be able to read the raw sampled values (as in a .wav) in real time as they come in when requested, like reading from a socket?
Hopefully code that uses one of Apple's frameworks (Audio Queues). Documentation is not very clear, and what I need is very basic.
Try the AudioQueue framework for this. You mainly have to perform three steps:
1. Set up an audio format specifying how to sample the incoming analog audio.
2. Start a new recording AudioQueue with AudioQueueNewInput().
3. Register a callback routine which handles the incoming audio data packets.
In step 3 you also have a chance to inspect the queue with AudioQueueGetProperty().
It's roughly like this:
static void HandleAudioCallback (void                               *aqData,
                                 AudioQueueRef                       inAQ,
                                 AudioQueueBufferRef                 inBuffer,
                                 const AudioTimeStamp               *inStartTime,
                                 UInt32                              inNumPackets,
                                 const AudioStreamPacketDescription *inPacketDesc) {
    // Here you examine your audio data
}

static void StartRecording() {
    // now let's start the recording
    AudioQueueNewInput (&aqData.mDataFormat,   // the sampling format (an AudioStreamBasicDescription)
                        HandleAudioCallback,   // your callback routine
                        &aqData,               // user data handed to the callback
                        NULL,                  // run loop (NULL = an internal thread)
                        kCFRunLoopCommonModes, // run loop mode
                        0,                     // reserved, must be 0
                        &aqData.mQueue);       // your freshly created AudioQueue

    AudioQueueStart(aqData.mQueue,
                    NULL);
}
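One detail that is easy to miss (it is covered in the guide below): the input callback only fires once buffers have been allocated and enqueued before the queue is started, roughly like this:
// allocate and enqueue a few buffers before AudioQueueStart,
// otherwise the input callback never receives any data
for (int i = 0; i < 3; ++i) {
    AudioQueueBufferRef buffer;
    AudioQueueAllocateBuffer(aqData.mQueue, 4096, &buffer); // 4096 bytes is an arbitrary example size
    AudioQueueEnqueueBuffer(aqData.mQueue, buffer, 0, NULL);
}
// and inside the callback, hand each used buffer back to the queue:
// AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);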
I suggest the Apple AudioQueue Services Programming Guide for detailed information about how to start and stop the AudioQueue and how to set up all the required objects correctly.
You may also have a closer look at Apple's demo program SpeakHere. But this is IMHO a bit confusing to start with.
It depends how "real-time" you need it.
If you need it very crisp, go right down to the bottom level and use audio units. That means setting up an INPUT callback. Remember, when this fires you need to allocate your own buffers and then request the audio from the microphone.
i.e. don't get fooled by the presence of a buffer pointer in the parameters... it is only there because Apple is using the same function declaration for the input and render callbacks.
Here is a paste from one of my projects:
OSStatus dataArrivedFromMic(void                       *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp       *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList            *dummy_notused)
{
    OSStatus status;

    RemoteIOAudioUnit* unitClass = (RemoteIOAudioUnit *)inRefCon;
    AudioComponentInstance myUnit = unitClass.myAudioUnit;

    // build our own buffer list; the one in the parameters is not for input callbacks
    AudioBufferList ioData;
    {
        int kNumChannels = 1; // one channel...

        enum {
            kMono   = 1,
            kStereo = 2
        };

        ioData.mNumberBuffers = kNumChannels;

        for (int i = 0; i < kNumChannels; i++)
        {
            int bytesNeeded = inNumberFrames * sizeof( Float32 );
            ioData.mBuffers[i].mNumberChannels = kMono;
            ioData.mBuffers[i].mDataByteSize = bytesNeeded;
            ioData.mBuffers[i].mData = malloc( bytesNeeded ); // remember to free this once done with the samples
        }
    }

    // actually GET the data that arrived
    status = AudioUnitRender( myUnit,
                              ioActionFlags,
                              inTimeStamp,
                              inBusNumber,
                              inNumberFrames,
                              &ioData );

    // take MONO from mic
    const int channel = 0;
    Float32 * outBuffer = (Float32 *) ioData.mBuffers[channel].mData;

    // get a handle to our game object
    static KPRing* kpRing = nil;
    if ( ! kpRing )
    {
        //AppDelegate * appDelegate = [UIApplication sharedApplication].delegate;
        kpRing = [Game singleton].kpRing;
        assert( kpRing );
    }

    // ... and send it the data we just got from the mic
    [ kpRing floatsArrivedFromMic: outBuffer
                            count: inNumberFrames ];

    return status;
}
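For completeness, a sketch of how such an input callback gets registered on the RemoteIO unit (names as in the paste above; bus 1 is the input element in the usual RemoteIO setup):
// register dataArrivedFromMic as the input callback of the RemoteIO unit
AURenderCallbackStruct cb;
cb.inputProc       = dataArrivedFromMic;
cb.inputProcRefCon = (void *)unitClass; // whatever you want handed back as inRefCon

AudioUnitSetProperty(myUnit,
                     kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global,
                     1,        // input bus
                     &cb,
                     sizeof(cb));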