How to reverse an audio wave using Processing

Is there a way to analyze the audio recorded by the application and reverse its wave? For example, in analog audio the sound is a wave like a sine wave, passing through 0, 1, -1. I want to reverse that so that 1 becomes -1 and -1 becomes 1. How do I do that using the Processing software?

Nikos is correct that the operation you are looking for is called invert, not reverse. It is achieved simply by multiplying every sample by -1.
The best way to do this is to use Minim, Processing's audio library. You can extend the UGen class to make a new effects processor that inverts every sample that passes through it. I've included an example below that works with a sine wave. You can swap in some other audio source and draw the result however you like.
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;

void setup()
{
  size(300, 200, P2D);

  minim = new Minim(this);
  out = minim.getLineOut();

  Oscil osc;
  Invert inv;

  // initialize the oscillator with a 500 Hz sine wave at 0.2 amplitude
  osc = new Oscil(500, 0.2, Waves.SINE);
  inv = new Invert();

  // patch the oscillator through the inverter and on to the output
  osc.patch(inv).patch(out);
}

void draw()
{
  background(0);
}

public class Invert extends UGen
{
  public UGenInput audio;

  Invert()
  {
    audio = new UGenInput(InputType.AUDIO);
  }

  @Override
  protected void uGenerate(float[] channels)
  {
    if ( audio.isPatched() )
    {
      for (int i = 0; i < channels.length; i++)
      {
        // this is where we multiply each sample by -1
        channels[i] = audio.getLastValues()[i] * -1;
      }
    }
  }
}
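The oscillator above is only a test signal. To invert audio that has already been recorded to a file, the same Invert UGen can sit between a FilePlayer and the line out. A minimal variation for setup(), assuming a hypothetical myrecording.wav in the sketch's data folder:
  // load the recording as a stream and patch it through the inverter
  // ("myrecording.wav" is just a placeholder name)
  FilePlayer recording = new FilePlayer( minim.loadFileStream("myrecording.wav") );
  recording.patch( new Invert() ).patch( out );
  recording.play();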

Related

Realtime 3D Audio Streaming and Playback

Currently I am working on streaming audio to Unity over the network. I successfully integrated a media library (GStreamer) with Unity and was able to play the audio inside the environment using the audio filter callback function attached to an AudioSource:
void OnAudioFilterRead(float[] data, int channels)
{
// fill data array with the streamed audio data
//....
}
The previous function provided 2D audio playback with very low latency.
In my application I want to render the audio in 3D spatial space, so the audio rendering will depend on the camera's (audio listener's) orientation.
I tried to stream audio data into AudioClip using the following:
AudioClip TargetClip;
public AudioSource TargetSrc;
void Start()
{
int freq=32000; //streamed audio sampling rate
TargetClip = AudioClip.Create ("test_Clip", freq, 1, freq, true,true, OnAudioRead,OnAudioSetPosition);
TargetSrc.clip = TargetClip;
TargetSrc.Play ();
}
void OnAudioRead(float[] data) {
// fill data array with the streamed audio data
//....
}
void OnAudioSetPosition(int newPosition) {
}
When I played the audio, it was rendered as I wanted in 3D spatial space; however, there was a huge latency (more than 2 seconds).
Is there any way to solve the latency problem?
I figured out how to solve this issue.
For those facing a similar problem: the method I was using in the first place, filling the AudioClip data, was not efficient. I should have used OnAudioFilterRead() in both cases, but for the spatial audio calculations it has to be used as follows:
public AudioSource TargetSrc;
void Start()
{
var dummy = AudioClip.Create ("dummy", 1, 1, AudioSettings.outputSampleRate, false);
dummy.SetData(new float[] { 1 }, 0);
TargetSrc.clip = dummy; //just to let unity play the audiosource
TargetSrc.loop = true;
TargetSrc.spatialBlend=1;
TargetSrc.Play ();
}
void OnAudioFilterRead(float[] data, int channels)
{
// "data" contains the weights of spatial calculations ready by unity
// multiply "data" with streamed audio data
for(int i=0;i<data.Length;++i)
{
data[i]*=internal_getSample(i);
}
}
Hope this helps someone else.
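The internal_getSample() call above stands for the asker's own code that returns the next streamed sample. The usual pattern behind it is a thread-safe ring buffer: the network thread writes incoming samples into it, the audio callback reads them out, and underruns are padded with silence so the audio thread never blocks. A minimal, language-agnostic sketch of that idea (written here in Java):
// Minimal single-producer/single-consumer ring buffer for float samples.
// This is an illustrative sketch, not code from the answer above.
class SampleRingBuffer {
  private final float[] buffer;
  private int readPos = 0, writePos = 0, count = 0;

  SampleRingBuffer(int capacity) {
    buffer = new float[capacity];
  }

  // called by the network/streaming thread
  synchronized void write(float[] samples, int length) {
    for (int i = 0; i < length && count < buffer.length; i++) {
      buffer[writePos] = samples[i];
      writePos = (writePos + 1) % buffer.length;
      count++;
    }
  }

  // called by the audio callback; returns silence on underrun
  synchronized float read() {
    if (count == 0) return 0f;
    float s = buffer[readPos];
    readPos = (readPos + 1) % buffer.length;
    count--;
    return s;
  }
}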

How to play sound onto microphone?

I want to make a soundboard in the Processing language that plays sounds so that the computer handles them as if they were inputs from my microphone. This is the only problem I have left with the soundboard. How do I make the sounds play as if they had been recorded by the microphone?
I have spent an hour searching and trying to get help, but I have nothing to work with.
Minim provides the class AudioInput for monitoring the user’s current record source (this is often set in the sound card control panel), such as the microphone or the line-in
from
http://code.compartmental.net/tools/minim/quickstart/
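If the goal is simply to hear that record source through the speakers, the AudioInput returned by getLineIn() can be monitored directly. A minimal sketch, assuming a recent Minim version where enableMonitoring() is available:
import ddf.minim.*;

Minim minim;
AudioInput in;

void setup()
{
  size(200, 200);
  minim = new Minim(this);
  // grab the current record source (microphone or line-in)
  in = minim.getLineIn(Minim.STEREO, 2048);
  // route the input straight through to the output so it can be heard
  in.enableMonitoring();
}

void draw()
{
  background(0);
}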
EDIT:
Have you seen this?
import ddf.minim.*;
import ddf.minim.ugens.*;
Minim minim;
// for recording
AudioInput in;
AudioRecorder recorder;
// for playing back
AudioOutput out;
FilePlayer player;
void setup()
{
size(512, 200, P3D);
minim = new Minim(this);
// get a stereo line-in: sample buffer length of 2048
// default sample rate is 44100, default bit depth is 16
in = minim.getLineIn(Minim.STEREO, 2048);
// create an AudioRecorder that will record from in to the filename specified.
// the file will be located in the sketch's main folder.
recorder = minim.createRecorder(in, "myrecording.wav");
// get an output we can playback the recording on
out = minim.getLineOut( Minim.STEREO );
textFont(createFont("Arial", 12));
}
void draw()
{
background(0);
stroke(255);
// draw the waveforms
// the values returned by left.get() and right.get() will be between -1 and 1,
// so we need to scale them up to see the waveform
for(int i = 0; i < in.left.size()-1; i++)
{
line(i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50);
line(i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50);
}
if ( recorder.isRecording() )
{
text("Now recording...", 5, 15);
}
else
{
text("Not recording.", 5, 15);
}
}
void keyReleased()
{
if ( key == 'r' )
{
// to indicate that you want to start or stop capturing audio data,
// you must call beginRecord() and endRecord() on the AudioRecorder object.
// You can start and stop as many times as you like, the audio data will
// be appended to the end of the file.
if ( recorder.isRecording() )
{
recorder.endRecord();
}
else
{
recorder.beginRecord();
}
}
if ( key == 's' )
{
// we've finished recording,
// now write the audio to a file of the type we specified in setup
// in the case of buffered recording,
// this will appear to freeze the sketch for some time, if the buffer is large
// in the case of streamed recording,
// it will not freeze as the data is already in the file and all that is being done
// is closing the file.
// save returns the recorded audio in an AudioRecordingStream,
// which we can then play with a FilePlayer
if ( player != null )
{
player.unpatch( out );
player.close();
}
player = new FilePlayer( recorder.save() );
player.patch( out );
player.play();
}
}
It's from here:
http://code.compartmental.net/minim/audiorecorder_class_audiorecorder.html

TarsosDSP pitch detection from a .wav file: the resulting frequency is always half of what it should be

I'm trying to use the TarsosDSP library to detect the pitch of a .wav file, and the resulting frequency is always half of what it should be.
Here is my code.
public class Main {
public static void main(String[] args){
try{
float sampleRate = 44100;
int audioBufferSize = 2048;
int bufferOverlap = 0;
//Create an AudioInputStream from my .wav file
URL soundURL = Main.class.getResource("/DetectPicthFromWav/329.wav");
AudioInputStream stream = AudioSystem.getAudioInputStream(soundURL);
//Convert into TarsosDSP API
JVMAudioInputStream audioStream = new JVMAudioInputStream(stream);
AudioDispatcher dispatcher = new AudioDispatcher(audioStream, audioBufferSize, bufferOverlap);
MyPitchDetector myPitchDetector = new MyPitchDetector();
dispatcher.addAudioProcessor(new PitchProcessor(PitchEstimationAlgorithm.YIN, sampleRate, audioBufferSize, myPitchDetector));
dispatcher.run();
}
catch(FileNotFoundException fne){fne.printStackTrace();}
catch(UnsupportedAudioFileException uafe){uafe.printStackTrace();}
catch(IOException ie){ie.printStackTrace();}
}
}
class MyPitchDetector implements PitchDetectionHandler{
//Here the result of pitch is always less than half.
@Override
public void handlePitch(PitchDetectionResult pitchDetectionResult,
AudioEvent audioEvent) {
if(pitchDetectionResult.getPitch() != -1){
double timeStamp = audioEvent.getTimeStamp();
float pitch = pitchDetectionResult.getPitch();
float probability = pitchDetectionResult.getProbability();
double rms = audioEvent.getRMS() * 100;
String message = String.format("Pitch detected at %.2fs: %.2fHz ( %.2f probability, RMS: %.5f )\n", timeStamp,pitch,probability,rms);
System.out.println(message);
}
}
}
The 329.wav file is generated from http://onlinetonegenerator.com/ website with 329Hz.
I don't know why the detected pitch is always 164.5 Hz. Is there any problem in my code?
Well, I don't know what methods you are using, but given that the frequency is exactly halved, it could be a problem of the wrong sample rate being set.
Most operations assume the sample rate at which the signal was originally sampled; maybe the value you've passed as an argument (or its default) is half of that?
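One quick way to rule that out is to print the format the stream actually reports instead of hard-coding 44100 Hz. A minimal check, assuming the same soundURL and javax.sound imports as in the question:
// print the values the file actually uses before handing them to TarsosDSP
AudioInputStream stream = AudioSystem.getAudioInputStream(soundURL);
AudioFormat format = stream.getFormat();
System.out.println("sample rate:     " + format.getSampleRate());
System.out.println("bits per sample: " + format.getSampleSizeInBits());
System.out.println("channels:        " + format.getChannels());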
I just had the same problem with TarsosDSP on Android. For me the answer was that the file from http://onlinetonegenerator.com/ has 32-bit samples instead of 16-bit, which appears to be the default. Reading 32-bit samples as if they were 16-bit splits each sample in two, so every period of the waveform looks twice as long and the detected pitch comes out an octave too low. Relevant code:
AssetFileDescriptor afd = getAssets().openFd("440.wav"); // 440Hz sine wave
InputStream is = afd.createInputStream();
TarsosDSPAudioFormat audioFormat = new TarsosDSPAudioFormat(
/* sample rate */ 44100,
/* HERE sample size in bits */ 32,
/* number of channels */ 1,
/* signed/unsigned data */ true,
/* big-endian byte order */ false
);
AudioDispatcher dispatcher = new AudioDispatcher(new UniversalAudioInputStream(is, audioFormat), 2048, 0);
PitchDetectionHandler pdh = ...
AudioProcessor p = new PitchProcessor(PitchProcessor.PitchEstimationAlgorithm.FFT_YIN, 44100, 2048, pdh);
dispatcher.addAudioProcessor(p);
new Thread(dispatcher, "Audio Dispatcher").start();

Unity3D Play sound from particle

I am trying to play a sound when a particle collides with a wall. Right now, it just plays the sound from the parent object, which is the player.
However, I want the sound to play from the particle. Which means when a particle is far to the left, you vaguely hear the sound coming from the left.
Is there a way to play the sound from a particle, when it collides?
You can use OnParticleCollision and the ParticlePhysicsExtensions, and play a sound with PlayClipAtPoint:
using UnityEngine;
using System.Collections;
[RequireComponent(typeof(ParticleSystem))]
public class CollidingParticles : MonoBehaviour {
public AudioClip collisionSFX;
ParticleSystem partSystem;
ParticleCollisionEvent[] collisionEvents;
void Awake () {
partSystem = GetComponent<ParticleSystem>();
collisionEvents = new ParticleCollisionEvent[16];
}
void OnParticleCollision (GameObject other) {
int safeLength = partSystem.GetSafeCollisionEventSize();
if (collisionEvents.Length < safeLength)
collisionEvents = new ParticleCollisionEvent[safeLength];
int totalCollisions = partSystem.GetCollisionEvents(other, collisionEvents);
for (int i = 0; i < totalCollisions; i++)
AudioSource.PlayClipAtPoint(collisionSFX, collisionEvents[i].intersection);
print (totalCollisions);
}
}
The problem is that the temporary AudioSource created by PlayClipAtPoint cannot be retrieved (to set it up as a 3D sound). So you are better off creating your own PlayClipAtPoint method that instantiates a prefab already configured with a 3D AudioSource and the clip you want, and runs Destroy(instance, seconds) to mark it for timed destruction.
The only way I can imagine is overriding the animation of the particle system via GetParticles/SetParticles. That way you can provide your own collision detection for the particles with Physics.RaycastAll and play a sound when collisions occur.
AudioSource audioSourcee;
public float timerToPlay;
float timerToSave;
void Start()
{
timerToSave = timerToPlay;
}
void OnEnable()
{
timerToPlay = timerToSave;
}
// Update is called once per frame
void Update()
{
if(timerToPlay>0)
timerToPlay -= Time.deltaTime;
if(timerToPlay<=0)
audioSourcee.Play();
}

minim loadFile before audio file is recorded/created

I am trying to create an app that would allow the user to record some sounds and then use them for playback.
I would like to have my application play a .wav file that the user will record.
I am having trouble figuring out how to code this, as I keep getting an error.
==== JavaSound Minim Error ====
==== Error invoking createInput on the file loader object: null
Snippet of code:
import ddf.minim.*;
AudioInput in;
AudioRecorder recorder;
RadioButtons r;
boolean showGUI = false;
color bgCol = color(0);
Minim minim;
//Recording players
AudioPlayer player1;
AudioPlayer player2;
void newFile()
{
countname =(name+1);
recorder = minim.createRecorder(in, "data/" + countname + ".wav", true);
}
......
void setup(){
minim = new Minim(this);
in = minim.getLineIn(Minim.MONO, 2048);
newFile();
player1 = minim.loadFile("data/" + countname + ".wav");// recording #1
player2 = minim.loadFile("data/" + countname + ".wav");//recording #2
void draw() {
// Draw the image to the screen at coordinate (0,0)
image(img,0,0);
//recording button
if(r.get() == 0)
{
for(int i = 0; i < in.left.size()-1; i++)
}
if ( recorder.isRecording() )
{
text("Currently recording...", 5, 15);
}
else
{
text("Not recording.", 5, 15);
}
}
//play button
if(r.get() == 1)
{
if(mousePressed){
.......
player_1.cue(0);
player_1.play();
}
if(mousePressed){
.......
player_2.cue(0);
player_2.play();
}
}
The place where I have a problem is here:
player1 = minim.loadFile("data/" + countname + ".wav");// recording #1
player2 = minim.loadFile("data/" + countname + ".wav");//recording #2
The recorded files will be 1.wav, 2.wav, and so on, but I cannot simply hard-code this as
player1.minim.loadFile ("1.wav");
player2.mminim.loadFile("2.wav");
How would I do this?
As indicated in the JavaDoc page for AudioRecorder [1], calls to beginRecord(), endRecord() and save() need to happen so that whatever you want to record is actually recorded and then saved to disk. As long as that has not happened, there is nothing for loadFile() to load and you will therefore receive errors. So the problem lies in your program flow: only when your program reaches a state where a file has already been recorded and saved can you actually load it.
There are probably also ways for you to play back whatever is being recorded right at the moment it arrives in your audio input buffer (one would usually refer to this as 'monitoring'), but as I understand it, that is not what you want.
Aside from this general conceptual flaw, there also seem to be other problems in your code. For example, countname is not incremented between the two subsequent loadFile calls (I assume it should be); also, at some point you call "player_1.play();" (note the underscore), although you are probably referring to the differently written variable initialized earlier as "player1 = minim.loadFile(...)".
[1] http://code.compartmental.net/minim/javadoc/ddf/minim/AudioRecorder.html
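As a minimal sketch of that flow, reusing the minim, in, recorder, countname and player1 variables from the question: stop and save the recording first, and only then load the file.
void keyReleased()
{
  if ( key == 'r' )
  {
    if ( recorder.isRecording() )
    {
      // stop recording and write the buffered audio to data/<countname>.wav
      recorder.endRecord();
      recorder.save();
      // the file exists on disk now, so loading it can succeed
      player1 = minim.loadFile("data/" + countname + ".wav");
      player1.play();
    }
    else
    {
      recorder.beginRecord();
    }
  }
}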
This is the approach to recording from an audio file into an AudioRecorder object. You load a file, play it, and then choose which section to save into another file that you can play back with an AudioPlayer object or your favorite sound player offered by your OS.
Related to
I am having trouble figuring out how to code this, as I keep getting an error.
Although it says it is an error, it doesn't affect the execution of your program. I would consider it a warning and ignore it. If you want to fix it, I believe you will need to edit the file's tags to set their values properly.
INSTRUCTIONS: In the code, define the file you want to play. When you run the sketch, press r to begin recording and r again to stop recording. Don't forget to press s to save the recording to an audio file, which will be located in the data folder.
NOTE: If you need to play wav files, you will need a Sampler object instead of a FilePlayer one.
//REFERENCE: https://forum.processing.org/one/topic/how-can-i-detect-sound-with-my-mic-in-my-computer.html
//REFERENCE: https://forum.processing.org/two/discussion/21842/is-it-possible-to-perform-fft-with-fileplayer-object-minim
/**
* This sketch demonstrates how to use an <code>AudioRecorder</code> to record audio to disk.
* Press 'r' to toggle recording on and off and the press 's' to save to disk.
* The recorded file will be placed in the sketch folder of the sketch.
* <p>
* For more information about Minim and additional features,
* visit http://code.compartmental.net/minim/
*/
import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.analysis.*;
Minim minim;
FilePlayer player;
AudioOutput out;
AudioRecorder recorder;
void setup()
{
size(512, 200, P3D);
textFont(createFont("Arial", 12));
minim = new Minim(this);
player = new FilePlayer(minim.loadFileStream("energeticDJ.mp3"));
// IT DOESN'T WORK FOR WAV files ====> player = new FilePlayer(minim.loadFileStream("fair1939.wav"));
out = minim.getLineOut();
TickRate rateControl = new TickRate(1.f);
player.patch(rateControl).patch(out);
recorder = minim.createRecorder(out, dataPath("myrecording.wav"),true);
player.loop(0);
}
void draw()
{
background(0);
stroke(255);
// draw a line to show where in the song playback is currently located
float posx = map(player.position(), 0, player.length(), 0, width);
stroke(0, 200, 0);
line(posx, 0, posx, height);
if ( recorder.isRecording() )
{
text("Currently recording...", 5, 15);
} else
{
text("Not recording.", 5, 15);
}
}
void keyReleased()
{
if ( key == 'r' )
{
// to indicate that you want to start or stop capturing audio data, you must call
// beginRecord() and endRecord() on the AudioRecorder object. You can start and stop
// as many times as you like, the audio data will be appended to the end of the buffer
// (in the case of buffered recording) or to the end of the file (in the case of streamed recording).
if ( recorder.isRecording() )
{
recorder.endRecord();
} else
{
recorder.beginRecord();
}
}
if ( key == 's' )
{
// we've filled the file out buffer,
// now write it to the file we specified in createRecorder
// in the case of buffered recording, if the buffer is large,
// this will appear to freeze the sketch for sometime
// in the case of streamed recording,
// it will not freeze as the data is already in the file and all that is being done
// is closing the file.
// the method returns the recorded audio as an AudioRecording,
// see the example AudioRecorder >> RecordAndPlayback for more about that
recorder.save();
println("Done saving.");
}
}
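Regarding the note above about wav files: a minimal variation that swaps the FilePlayer for a Sampler, assuming the same minim and out objects and the wav file named in the commented-out line:
// a Sampler takes the file name, a maximum number of overlapping voices, and the Minim instance
Sampler wavPlayer = new Sampler( "fair1939.wav", 4, minim );
wavPlayer.patch( out );
// unlike a FilePlayer, a Sampler is started with trigger()
wavPlayer.trigger();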
