Duration of an AMR audio file - Linux

I want to find the duration of an audio file of type "amr" without converting it to another audio format.
Is there any way to do this?
AK

I have coded the following in Objective-C (using FFmpeg's libavformat) to get the duration of a movie. It can similarly be used to get the duration of audio too:
- (double)durationOfMovieAtPath:(NSString *)inMoviePath
{
    double durationToReturn = -1;
    NSFileManager *fm = [NSFileManager defaultManager];
    if ([fm fileExistsAtPath:inMoviePath])
    {
        av_register_all();
        AVFormatContext *inMovieFormat = avformat_alloc_context();
        int errorCode = av_open_input_file(&inMovieFormat, [inMoviePath UTF8String], NULL, 0, NULL);
        //double durationToReturn = (double)inMovieFormat->duration / AV_TIME_BASE;
        if (0 == errorCode)
        {
            // only on success
            int numberOfStreams = inMovieFormat->nb_streams;
            AVStream *videoStream = NULL;
            for (int i = 0; i < numberOfStreams; i++)
            {
                AVStream *st = inMovieFormat->streams[i];
                if (st->codec->codec_type == CODEC_TYPE_VIDEO)
                {
                    videoStream = st;
                    break;
                }
            }
            if (NULL != videoStream)
            {
                // The duration in AVStream is expressed in units of that stream's time_base,
                // so convert it back to seconds with this factor.
                // rationalToDouble() is a helper that converts an AVRational to a double
                // (equivalent to FFmpeg's av_q2d()).
                double divideFactor = (double)1 / rationalToDouble(videoStream->time_base);
                durationToReturn = (double)videoStream->duration / divideFactor;
            }
            //DEBUGLOG (@"Duration of movie at path: %@ = %0.3f", inMoviePath, durationToReturn);
        }
        else
        {
            DEBUGLOG (@"av_open_input_file error code = %d", errorCode);
        }
        if (nil != inMovieFormat)
        {
            av_close_input_file(inMovieFormat);
            //av_free(inMovieFormat);
        }
    }
    return durationToReturn;
}
Change the CODEC_TYPE_VIDEO to CODEC_TYPE_AUDIO and I think it should work for you.
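As a side note, av_open_input_file and av_close_input_file have been removed from newer FFmpeg releases. A rough, untested sketch of the same idea against the current libavformat API (plain C, looking for the audio stream rather than the video stream) would be:

// Untested sketch against a recent libavformat (FFmpeg 3.x or later);
// error handling is minimal. Returns the duration in seconds, or -1 on failure.
#include <libavformat/avformat.h>

double durationOfAudioAtPath(const char *path)
{
    AVFormatContext *fmt = NULL;
    double seconds = -1;

    if (avformat_open_input(&fmt, path, NULL, NULL) != 0)
        return -1;
    if (avformat_find_stream_info(fmt, NULL) >= 0)
    {
        for (unsigned int i = 0; i < fmt->nb_streams; i++)
        {
            AVStream *st = fmt->streams[i];
            if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
            {
                if (st->duration != AV_NOPTS_VALUE)
                    seconds = st->duration * av_q2d(st->time_base); // stream duration is in time_base units
                break;
            }
        }
        // fall back to the container-level duration if the stream did not report one
        if (seconds < 0 && fmt->duration != AV_NOPTS_VALUE)
            seconds = (double)fmt->duration / AV_TIME_BASE;
    }
    avformat_close_input(&fmt);
    return seconds;
}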

Related

Detect voice by audio recorder in Android Studio

Well, I would like to implement a function such that when the application starts, the recorder starts recording; while the user keeps silent nothing happens, but once the user speaks it saves the PCM file of the user's voice and then stops recording.
Voice Detection in Android Application
Above is a question I found that is similar to mine, but the answer at that link does not work, and I don't know how to modify it since I don't understand the concept behind the code.
Please help me~
Well, I solved my problem, here is my solution.
I modified the code that came from this URL:
Voice Detection in Android Application
private static final String TAG = "MainActivity";
private static int RECORDER_SAMPLERATE = 44100;
private static int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_STEREO;
private static int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private Button btn, btn_convert, btn_play;
private TextView txv;
boolean isRecording = false;
private File file;
private AudioRecord audioRecord;
int bufferSizeInBytes = 0;
Context context = MainActivity.this;
// path
final String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/final.pcm" ;
final String outpath = path.replace(".pcm", ".wav");
public void autoRecording(){
// Get the minimum buffer size required for the successful creation of an AudioRecord object.
bufferSizeInBytes = AudioRecord.getMinBufferSize( RECORDER_SAMPLERATE,
RECORDER_CHANNELS,
RECORDER_AUDIO_ENCODING
);
// Initialize Audio Recorder.
AudioRecord audioRecorder = new AudioRecord( MediaRecorder.AudioSource.MIC,
RECORDER_SAMPLERATE,
RECORDER_CHANNELS,
RECORDER_AUDIO_ENCODING,
bufferSizeInBytes
);
// Start Recording.
txv.setText("Ing");
audioRecorder.startRecording();
isRecording = true;
// for auto stop
int numberOfReadBytes = 0;
byte audioBuffer[] = new byte[bufferSizeInBytes];
boolean recording = false;
float tempFloatBuffer[] = new float[3];
int tempIndex = 0;
// create file
file = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/final.pcm");
Log.d(TAG, "recording: file path:" + file.toString());
if (file.exists()){
Log.d(TAG,"file exist, delete file");
file.delete();
}
try {
Log.d(TAG,"file created");
file.createNewFile();
} catch (IOException e) {
Log.d(TAG,"didn't create the file:" + e.getMessage());
throw new IllegalStateException("did not create file:" + file.toString());
}
// initiate media scan and put the new things into the path array to
// make the scanner aware of the location and the files you want to see
MediaScannerConnection.scanFile(context, new String[] {file.toString()}, null, null);
// output stream
OutputStream os = null;
DataOutputStream dos = null;
try {
os = new FileOutputStream(file);
BufferedOutputStream bos = new BufferedOutputStream(os);
dos = new DataOutputStream(bos);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
// While data come from microphone.
while( true )
{
float totalAbsValue = 0.0f;
short sample = 0;
numberOfReadBytes = audioRecorder.read( audioBuffer, 0, bufferSizeInBytes );
// Analyze Sound.
for( int i=0; i<bufferSizeInBytes; i+=2 )
{
sample = (short)( (audioBuffer[i]) | audioBuffer[i + 1] << 8 );
totalAbsValue += (float)Math.abs( sample ) / ((float)numberOfReadBytes/(float)2);
}
// read in file
for (int i = 0; i < numberOfReadBytes; i++) {
try {
dos.writeByte(audioBuffer[i]);
} catch (IOException e) {
e.printStackTrace();
}
}
// Analyze temp buffer.
tempFloatBuffer[tempIndex%3] = totalAbsValue;
float temp = 0.0f;
for( int i=0; i<3; ++i )
temp += tempFloatBuffer[i];
if( (temp >=0 && temp <= 2100) && recording == false ) // the best number for close to device: 3000
{ // the best number for a little bit distance : 2100
Log.i("TAG", "1");
tempIndex++;
continue;
}
if( temp > 2100 && recording == false )
{
Log.i("TAG", "2");
recording = true;
}
if( (temp >= 0 && temp <= 2100) && recording == true )
{
Log.i("TAG", "final run");
//isRecording = false;
txv.setText("Stop Record.");
//*/
tempIndex++;
audioRecorder.stop();
try {
dos.close();
} catch (IOException e) {
e.printStackTrace();
}
break;
}
}
}
What this function does:
If you call this function, the recorder starts recording, and once you make a sound it stops recording and saves the data into a file (PCM format). Note that noise will also make it stop.

NAudio MP3 decoding clicks and pops

I followed this NAudio demo, modified to play SHOUTcast.
In my full code I have to resample the incoming audio and stream it again over the network to a network player. Since I get many "clicks and pops", I went back to the demo code and found that these artifacts originate after the decoding block.
If I save the incoming stream in MP3 format, it is pretty clean.
When I save the raw decoded data (with no processing other than the decoder) I get many audio artifacts.
I wonder whether I am making some error, even though my code is almost identical to the NAudio demo.
Here is the function from the example, as modified by me to save the raw data. It is called on a new thread.
private void StreamMP3(object state)
{
//Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
//SettingsSection section = (SettingsSection)config.GetSection("system.net/settings");
this.fullyDownloaded = false;
string url = "http://icestreaming.rai.it/5.mp3";//(string)state;
webRequest = (HttpWebRequest)WebRequest.Create(url);
int metaInt = 0; // blocksize of mp3 data
int framesize = 0;
webRequest.Headers.Clear();
webRequest.Headers.Add("GET", "/ HTTP/1.0");
// needed to receive metadata informations
webRequest.Headers.Add("Icy-MetaData", "1");
webRequest.UserAgent = "WinampMPEG/5.09";
HttpWebResponse resp = null;
try
{
resp = (HttpWebResponse)webRequest.GetResponse();
}
catch (WebException e)
{
if (e.Status != WebExceptionStatus.RequestCanceled)
{
ShowError(e.Message);
}
return;
}
byte[] buffer = new byte[16384 * 4]; // needs to be big enough to hold a decompressed frame
try
{
// read blocksize to find metadata block
metaInt = Convert.ToInt32(resp.GetResponseHeader("icy-metaint"));
}
catch
{
}
IMp3FrameDecompressor decompressor = null;
byteOut = createNewFile(destPath, "salva", "raw");
try
{
using (var responseStream = resp.GetResponseStream())
{
var readFullyStream = new ReadFullyStream(responseStream);
readFullyStream.metaInt = metaInt;
do
{
if (mybufferedWaveProvider != null && mybufferedWaveProvider.BufferLength - mybufferedWaveProvider.BufferedBytes < mybufferedWaveProvider.WaveFormat.AverageBytesPerSecond / 4)
{
Debug.WriteLine("Buffer getting full, taking a break");
Thread.Sleep(500);
}
else
{
Mp3Frame frame = null;
try
{
frame = Mp3Frame.LoadFromStream(readFullyStream, true);
if (metaInt > 0)
UpdateSongName(readFullyStream.SongName);
else
UpdateSongName("No Song Info in Stream...");
}
catch (EndOfStreamException)
{
this.fullyDownloaded = true;
// reached the end of the MP3 file / stream
break;
}
catch (WebException)
{
// probably we have aborted download from the GUI thread
break;
}
if (decompressor == null)
{
// don't think these details matter too much - just help ACM select the right codec
// however, the buffered provider doesn't know what sample rate it is working at
// until we have a frame
WaveFormat waveFormat = new Mp3WaveFormat(frame.SampleRate, frame.ChannelMode == ChannelMode.Mono ? 1 : 2, frame.FrameLength, frame.BitRate);
decompressor = new AcmMp3FrameDecompressor(waveFormat);
this.mybufferedWaveProvider = new BufferedWaveProvider(decompressor.OutputFormat);
this.mybufferedWaveProvider.BufferDuration = TimeSpan.FromSeconds(200); // allow us to get well ahead of ourselves
framesize = (decompressor.OutputFormat.Channels * decompressor.OutputFormat.SampleRate * (decompressor.OutputFormat.BitsPerSample / 8) * 20) / 1000;
//this.bufferedWaveProvider.BufferedDuration = 250;
}
int decompressed = decompressor.DecompressFrame(frame, buffer, 0);
//Debug.WriteLine(String.Format("Decompressed a frame {0}", decompressed));
mybufferedWaveProvider.AddSamples(buffer, 0, decompressed);
while (mybufferedWaveProvider.BufferedDuration.Milliseconds >= 20)
{
byte[] read = new byte[framesize];
mybufferedWaveProvider.Read(read, 0, framesize);
byteOut.Write(read, 0, framesize);
}
}
} while (playbackState != StreamingPlaybackState.Stopped);
Debug.WriteLine("Exiting");
// was doing this in a finally block, but for some reason
// we are hanging on response stream .Dispose so never get there
decompressor.Dispose();
}
}
finally
{
if (decompressor != null)
{
decompressor.Dispose();
}
}
}
OK, I found the problem: I was including the SHOUTcast metadata in the MP3 frame.
See the comment "HERE I COLLECT THE BYTES OF THE MP3 FRAME" to locate the correct point at which to get the MP3 frame with no streaming metadata.
The following code runs without audio artifacts:
private void SHOUTcastReceiverThread()
{
//-*- String server = "http://216.235.80.18:8285/stream";
//String serverPath = "/";
//String destPath = "C:\\temp\\"; // destination path for saved songs
HttpWebRequest request = null; // web request
HttpWebResponse response = null; // web response
int metaInt = 0; // blocksize of mp3 data
int count = 0; // byte counter
int metadataLength = 0; // length of metadata header
string metadataHeader = ""; // metadata header that contains the actual songtitle
string oldMetadataHeader = null; // previous metadata header, to compare with new header and find next song
//CircularQueueStream framestream = new CircularQueueStream(2048);
QueueStream framestream = new QueueStream();
framestream.Position = 0;
bool bNewSong = false;
byte[] buffer = new byte[512]; // receive buffer
byte[] dec_buffer = new byte[decSIZE];
Mp3Frame frame;
IMp3FrameDecompressor decompressor = null;
Stream socketStream = null; // input stream on the web request
// create web request
request = (HttpWebRequest)WebRequest.Create(server);
// clear old request header and build own header to receive ICY-metadata
request.Headers.Clear();
request.Headers.Add("GET", serverPath + " HTTP/1.0");
request.Headers.Add("Icy-MetaData", "1"); // needed to receive metadata informations
request.UserAgent = "WinampMPEG/5.09";
// execute request
try
{
response = (HttpWebResponse)request.GetResponse();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
return;
}
// read blocksize to find metadata header
metaInt = Convert.ToInt32(response.GetResponseHeader("icy-metaint"));
try
{
// open stream on response
socketStream = response.GetResponseStream();
var readFullyStream = new ReadFullyStream(socketStream);
frame = null;
// rip stream in an endless loop
do
{
if (IsBufferNearlyFull)
{
Debug.WriteLine("Buffer getting full, taking a break");
Thread.Sleep(500);
frame = null;
}
else
{
int bufLen = readFullyStream.Read(buffer, 0, buffer.Length);
try
{
if (framestream.CanRead && framestream.Length > 512)
frame = Mp3Frame.LoadFromStream(framestream);
else
frame = null;
}
catch (Exception ex)
{
frame = null;
}
if (bufLen < 0)
{
Debug.WriteLine("Buffer error 1: exit.");
return;
}
// processing RAW data
for (int i = 0; i < bufLen; i++)
{
// if there is a header, the 'headerLength' would be set to a value != 0. Then we save the header to a string
if (metadataLength != 0)
{
metadataHeader += Convert.ToChar(buffer[i]);
metadataLength--;
if (metadataLength == 0) // all metadata informations were written to the 'metadataHeader' string
{
string fileName = "";
string fileNameRaw = "";
// if songtitle changes, create a new file
if (!metadataHeader.Equals(oldMetadataHeader))
{
// flush and close old byteOut stream
if (byteOut != null)
{
byteOut.Flush();
byteOut.Close();
byteOut = null;
}
if (byteOutRaw != null)
{
byteOutRaw.Flush();
byteOutRaw.Close();
byteOutRaw = null;
}
timeStart = timeEnd;
// extract songtitle from metadata header. Trim was needed, because some stations don't trim the songtitle
//fileName = Regex.Match(metadataHeader, "(StreamTitle=')(.*)(';StreamUrl)").Groups[2].Value.Trim();
fileName = Regex.Match(metadataHeader, "(StreamTitle=')(.*)(';)").Groups[2].Value.Trim();
// write new songtitle to console for information
if (fileName.Length == 0)
fileName = "shoutcast_test";
fileNameRaw = fileName + "_raw";
framestream.reSetPosition();
SongChanged(this, metadataHeader);
bNewSong = true;
// create new file with the songtitle from header and set a stream on this file
timeEnd = DateTime.Now;
if (bWrite_to_file)
{
byteOut = createNewFile(destPath, fileName, "mp3");
byteOutRaw = createNewFile(destPath, fileNameRaw, "raw");
}
timediff = timeEnd - timeStart;
// save new header to 'oldMetadataHeader' string, to compare if there's a new song starting
oldMetadataHeader = metadataHeader;
}
metadataHeader = "";
}
}
else // write mp3 data to file or extract metadata headerlength
{
if (count++ < metaInt) // write bytes to filestream
{
//HERE I COLLECT THE BYTES OF THE MP3 FRAME
framestream.Write(buffer, i, 1);
}
else // get headerlength from lengthbyte and multiply by 16 to get correct headerlength
{
metadataLength = Convert.ToInt32(buffer[i]) * 16;
count = 0;
}
}
}//for
if (bNewSong)
{
decompressor = createDecompressor(frame);
bNewSong = false;
}
if (frame != null && decompressor != null)
{
framedec(decompressor, frame);
}
// end of RAW data processing
}//Buffer is not full
SHOUTcastStatusProcess();
} while (playbackState != StreamingPlaybackState.Stopped);
} //try
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
finally
{
if (byteOut != null)
byteOut.Close();
if (socketStream != null)
socketStream.Close();
if (decompressor != null)
{
decompressor.Dispose();
decompressor = null;
}
if (null != request)
request.Abort();
if (null != framestream)
framestream.Dispose();
if (null != bufferedWaveProvider)
bufferedWaveProvider.ClearBuffer();
//if (null != bufferedWaveProviderOut)
// bufferedWaveProviderOut.ClearBuffer();
if (null != mono16bitFsinStream)
{
mono16bitFsinStream.Close();
mono16bitFsinStream.Dispose();
}
if (null != middleStream2)
{
middleStream2.Close();
middleStream2.Dispose();
}
if (null != resampler)
resampler.Dispose();
}
}
public class QueueStream : MemoryStream
{
long ReadPosition = 0;
long WritePosition = 0;
public QueueStream() : base() { }
public override int Read(byte[] buffer, int offset, int count)
{
Position = ReadPosition;
var temp = base.Read(buffer, offset, count);
ReadPosition = Position;
return temp;
}
public override void Write(byte[] buffer, int offset, int count)
{
Position = WritePosition;
base.Write(buffer, offset, count);
WritePosition = Position;
}
public void reSetPosition()
{
WritePosition = 0;
ReadPosition = 0;
Position = 0;
}
}
private void framedec(IMp3FrameDecompressor decompressor, Mp3Frame frame)
{
int Ndecoded_samples = 0;
byte[] dec_buffer = new byte[decSIZE];
Ndecoded_samples = decompressor.DecompressFrame(frame, dec_buffer, 0);
bufferedWaveProvider.AddSamples(dec_buffer, 0, Ndecoded_samples);
NBufferedSamples += Ndecoded_samples;
brcnt_in.incSamples(Ndecoded_samples);
if (Ndecoded_samples > decSIZE)
{
Debug.WriteLine(String.Format("Too many samples {0}", Ndecoded_samples));
}
if (byteOut != null)
byteOut.Write(frame.RawData, 0, frame.RawData.Length);
if (byteOutRaw != null) // as long as we don't have a songtitle, we don't open a new file and don't write any bytes
byteOutRaw.Write(dec_buffer, 0, Ndecoded_samples);
frame = null;
}
private IMp3FrameDecompressor createDecompressor(Mp3Frame frame)
{
IMp3FrameDecompressor dec = null;
if (frame != null)
{
// don't think these details matter too much - just help ACM select the right codec
// however, the buffered provider doesn't know what sample rate it is working at
// until we have a frame
WaveFormat srcwaveFormat = new Mp3WaveFormat(frame.SampleRate, frame.ChannelMode == ChannelMode.Mono ? 1 : 2, frame.FrameLength, frame.BitRate);
dec = new AcmMp3FrameDecompressor(srcwaveFormat);
bufferedWaveProvider = new BufferedWaveProvider(dec.OutputFormat);// decompressor.OutputFormat
bufferedWaveProvider.BufferDuration = TimeSpan.FromSeconds(400); // allow us to get well ahead of ourselves
// ------------------------------------------------
//Create an intermediate format with same sampling rate, 16 bit, mono
middlewavformat = new WaveFormat(dec.OutputFormat.SampleRate, 16, 1);
outwavFormat = new WaveFormat(Fs_out, 16, 1);
// wave16ToFloat = new Wave16ToFloatProvider(provider); // I have tried with and without this converter.
wpws = new WaveProviderToWaveStream(bufferedWaveProvider);
//Check middlewavformat.Encoding == WaveFormatEncoding.Pcm;
mono16bitFsinStream = new WaveFormatConversionStream(middlewavformat, wpws);
middleStream2 = new BlockAlignReductionStream(mono16bitFsinStream);
resampler = new MediaFoundationResampler(middleStream2, outwavFormat);
}
return dec;
}

Audio Frames repeating in AVFramework created *.mov file via AVAsset

I am running into some problems trying to create a ProRes-encoded .mov file using the AVFoundation framework and AVAssetWriter.
This is on OS X 10.10.5, using Xcode 7, linking against the 10.9 libraries.
So far I have managed to create valid ProRes files that contain both video and multiple channels of audio.
(I am creating multiple tracks of uncompressed 48 kHz, 16-bit PCM audio.)
Adding the video frames works well, and adding the audio frames works well, or at least succeeds in the code.
However, when I play the file back, it appears as though the audio frames are repeated in sequences of 12, 13, 14, or 15 frames.
Looking at the waveform from the *.mov, it is easy to see the repeated audio.
That is to say, the first 13 or so video frames all contain exactly the same audio; this is then repeated for the next batch, and then again and again, etc.
The video is fine; it is just the audio that appears to be looping/repeating.
The issue appears no matter how many audio channels/tracks I use as the source; I have tested using just 1 track and also using 4 and 8 tracks.
It is independent of what format and number of samples I feed to the system: 720p60, 1080p23, and 1080i59 all exhibit the same incorrect behavior.
Well, actually the 720p captures appear to repeat the audio frames 30 or 31 times, and the 1080 formats only repeat the audio frames 12 or 13 times.
But I am definitely submitting different audio data to the audio encode/sample buffer creation process, as I have logged this in great detail (though it is not shown in the code below).
I have tried a number of different things to modify the code and expose the issue, but had no success, hence I am asking here; hopefully someone can either see an issue with my code or give me some information about this problem.
The code I am using is as follows:
int main(int argc, const char * argv[])
{
@autoreleasepool
{
NSLog(@"Hello, World! - Welcome to the ProResCapture With Audio sample app. ");
OSStatus status;
AudioStreamBasicDescription audioFormat;
CMAudioFormatDescriptionRef audioFormatDesc;
// OK so lets include the hardware stuff first and then we can see about doing some actual capture and compress stuff
HARDWARE_HANDLE pHardware = sdiFactory();
if (pHardware)
{
unsigned long ulUpdateType = UPD_FMT_FRAME;
unsigned long ulFieldCount = 0;
unsigned int numAudioChannels = 4; //8; //4;
int numFramesToCapture = 300;
gBFHancBuffer = (unsigned int*)myAlloc(gHANC_SIZE);
int audioSize = 2002 * 4 * 16;
short* pAudioSamples = (short*)new char[audioSize];
std::vector<short*> vecOfNonInterleavedAudioSamplesPtrs;
for (int i = 0; i < 16; i++)
{
vecOfNonInterleavedAudioSamplesPtrs.push_back((short*)myAlloc(2002 * sizeof(short)));
}
bool bVideoModeIsValid = SetupAndConfigureHardwareToCaptureIncomingVideo();
if (bVideoModeIsValid)
{
gBFBytes = (BLUE_UINT32*)myAlloc(gGoldenSize);
bool canAddVideoWriter = false;
bool canAddAudioWriter = false;
int nAudioSamplesWritten = 0;
// declare the vars for our various AVAsset elements
AVAssetWriter* assetWriter = nil;
AVAssetWriterInput* assetWriterInputVideo = nil;
AVAssetWriterInput* assetWriterAudioInput[16];
AVAssetWriterInputPixelBufferAdaptor* adaptor = nil;
NSURL* localOutputURL = nil;
NSError* localError = nil;
// create the file we are going to be writing to
localOutputURL = [NSURL URLWithString:@"file:///Volumes/Media/ProResAVCaptureAnyFormat.mov"];
assetWriter = [[AVAssetWriter alloc] initWithURL: localOutputURL fileType:AVFileTypeQuickTimeMovie error:&localError];
if (assetWriter)
{
assetWriter.shouldOptimizeForNetworkUse = NO;
// Lets configure the Audio and Video settings for this writer...
{
// Video First.
// Add a video input
// create a dictionary with the settings we want ie. Prores capture and width and height.
NSMutableDictionary* videoSettings = [NSMutableDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecAppleProRes422, AVVideoCodecKey,
[NSNumber numberWithInt:width], AVVideoWidthKey,
[NSNumber numberWithInt:height], AVVideoHeightKey,
nil];
assetWriterInputVideo = [AVAssetWriterInput assetWriterInputWithMediaType: AVMediaTypeVideo outputSettings:videoSettings];
adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterInputVideo
sourcePixelBufferAttributes:nil];
canAddVideoWriter = [assetWriter canAddInput:assetWriterInputVideo];
}
{ // Add a Audio AssetWriterInput
// Create a dictionary with the settings we want ie. Uncompressed PCM audio 16 bit little endian.
NSMutableDictionary* audioSettings = [NSMutableDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:48000.0], AVSampleRateKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
[NSNumber numberWithUnsignedInteger:1], AVNumberOfChannelsKey,
nil];
// OR use... FillOutASBDForLPCM(AudioStreamBasicDescription& outASBD, Float64 inSampleRate, UInt32 inChannelsPerFrame, UInt32 inValidBitsPerChannel, UInt32 inTotalBitsPerChannel, bool inIsFloat, bool inIsBigEndian, bool inIsNonInterleaved = false)
UInt32 inValidBitsPerChannel = 16;
UInt32 inTotalBitsPerChannel = 16;
bool inIsFloat = false;
bool inIsBigEndian = false;
UInt32 inChannelsPerTrack = 1;
FillOutASBDForLPCM(audioFormat, 48000.00, inChannelsPerTrack, inValidBitsPerChannel, inTotalBitsPerChannel, inIsFloat, inIsBigEndian);
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
&audioFormat,
0,
NULL,
0,
NULL,
NULL,
&audioFormatDesc
);
for (int t = 0; t < numAudioChannels; t++)
{
assetWriterAudioInput[t] = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioSettings];
canAddAudioWriter = [assetWriter canAddInput:assetWriterAudioInput[t] ];
if (canAddAudioWriter)
{
assetWriterAudioInput[t].expectsMediaDataInRealTime = YES; //true;
[assetWriter addInput:assetWriterAudioInput[t] ];
}
}
CMFormatDescriptionRef myFormatDesc = assetWriterAudioInput[0].sourceFormatHint;
NSString* medType = [assetWriterAudioInput[0] mediaType];
}
if(canAddVideoWriter)
{
// tell the asset writer to expect media in real time.
assetWriterInputVideo.expectsMediaDataInRealTime = YES; //true;
// add the Input(s)
[assetWriter addInput:assetWriterInputVideo];
// Start writing the frames..
BOOL success = true;
success = [assetWriter startWriting];
CMTime startTime = CMTimeMake(0, fpsRate);
[assetWriter startSessionAtSourceTime:kCMTimeZero];
// [assetWriter startSessionAtSourceTime:startTime];
if (success)
{
startOurVideoCaptureProcess();
// **** possible enhancement is to use a pixelBufferPool to manage multiple buffers at once...
CVPixelBufferRef buffer = NULL;
int kRecordingFPS = fpsRate;
bool frameAdded = false;
unsigned int bufferID;
for( int i = 0; i < numFramesToCapture; i++)
{
printf("\n");
buffer = pixelBufferFromCard(bufferID, width, height, memFmt); // This function to get a CVBufferREf From our device, as well as getting the Audio data
while(!adaptor.assetWriterInput.readyForMoreMediaData)
{
printf(" readyForMoreMediaData FAILED \n");
}
if (buffer)
{
// Add video
printf("appending Frame %d ", i);
CMTime frameTime = CMTimeMake(i, kRecordingFPS);
frameAdded = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if (frameAdded)
printf("VideoAdded.....\n ");
// Add Audio
{
// Do some Processing on the captured data to extract the interleaved Audio Samples for each channel
struct hanc_decode_struct decode;
DecodeHancFrameEx(gBFHancBuffer, decode);
int nAudioSamplesCaptured = 0;
if(decode.no_audio_samples > 0)
{
printf("completed deCodeHancEX, found %d samples \n", ( decode.no_audio_samples / numAudioChannels) );
nAudioSamplesCaptured = decode.no_audio_samples / numAudioChannels;
}
CMTime audioTimeStamp = CMTimeMake(nAudioSamplesWritten, 480000); // (Samples Written) / sampleRate for audio
// This function repacks the Audio from interleaved PCM data a vector of individual array of Audio data
RepackDecodedHancAudio((void*)pAudioSamples, numAudioChannels, nAudioSamplesCaptured, vecOfNonInterleavedAudioSamplesPtrs);
for (int t = 0; t < numAudioChannels; t++)
{
CMBlockBufferRef blockBuf = NULL; // *********** MUST release these AFTER adding the samples to the assetWriter...
CMSampleBufferRef cmBuf = NULL;
int sizeOfSamplesInBytes = nAudioSamplesCaptured * 2; // always 16bit memory samples...
// Create sample Block buffer for adding to the audio input.
status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
(void*)vecOfNonInterleavedAudioSamplesPtrs[t],
sizeOfSamplesInBytes,
kCFAllocatorNull,
NULL,
0,
sizeOfSamplesInBytes,
0,
&blockBuf);
if (status != noErr)
NSLog(#"CMBlockBufferCreateWithMemoryBlock error");
status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
blockBuf,
TRUE,
0,
NULL,
audioFormatDesc,
nAudioSamplesCaptured,
audioTimeStamp,
NULL,
&cmBuf);
if (status != noErr)
NSLog(#"CMSampleBufferCreate error");
// leys check if the CMSampleBuf is valid
bool bValid = CMSampleBufferIsValid(cmBuf);
// examine this values for debugging info....
CMTime cmTimeSampleDuration = CMSampleBufferGetDuration(cmBuf);
CMTime cmTimePresentationTime = CMSampleBufferGetPresentationTimeStamp(cmBuf);
if (status != noErr)
NSLog(#"Invalid Buffer found!!! possible CMSampleBufferCreate error?");
if(!assetWriterAudioInput[t].readyForMoreMediaData)
printf(" readyForMoreMediaData FAILED - Had to Drop a frame\n");
else
{
if(assetWriter.status == AVAssetWriterStatusWriting)
{
BOOL r = YES;
r = [assetWriterAudioInput[t] appendSampleBuffer:cmBuf];
if (!r)
{
NSLog(#"appendSampleBuffer error");
}
else
success = true;
}
else
printf("AssetWriter Not ready???!? \n");
}
if (cmBuf)
{
CFRelease(cmBuf);
cmBuf = 0;
}
if(blockBuf)
{
CFRelease(blockBuf);
blockBuf = 0;
}
}
nAudioSamplesWritten = nAudioSamplesWritten + nAudioSamplesCaptured;
}
if(success)
{
printf("Audio tracks Added..");
}
else
{
NSError* nsERR = [assetWriter error];
printf("Problem Adding Audio tracks / samples");
}
printf("Success \n");
}
if (buffer)
{
CVBufferRelease(buffer);
}
}
}
AVAssetWriterStatus sta = [assetWriter status];
CMTime endTime = CMTimeMake((numFramesToCapture-1), fpsRate);
if (audioFormatDesc)
{
CFRelease(audioFormatDesc);
audioFormatDesc = 0;
}
// Finish the session
StopVideoCaptureProcess();
[assetWriterInputVideo markAsFinished];
for (int t = 0; t < numAudioChannels; t++)
{
[assetWriterAudioInput[t] markAsFinished];
}
[assetWriter endSessionAtSourceTime:endTime];
bool finishedSuccessfully = [assetWriter finishWriting];
if (finishedSuccessfully)
NSLog(#"Writing file ended successfully \n");
else
{
NSLog(#"Writing file ended WITH ERRORS...");
sta = [assetWriter status];
if (sta != AVAssetWriterStatusCompleted)
{
NSError* nsERR = [assetWriter error];
printf("investoigating the error \n");
}
}
}
else
{
NSLog(#"Unable to Add the InputVideo Asset Writer to the AssetWriter, file will not be written - Exiting");
}
if (audioFormatDesc)
CFRelease(audioFormatDesc);
}
for (int i = 0; i < 16; i++)
{
if (vecOfNonInterleavedAudioSamplesPtrs[i])
{
bfFree(2002 * sizeof(unsigned short), vecOfNonInterleavedAudioSamplesPtrs[i]);
vecOfNonInterleavedAudioSamplesPtrs[i] = nullptr;
}
}
}
else
{
NSLog(#"Unable to find a valid input signal - Exiting");
}
if (pAudioSamples)
delete pAudioSamples;
}
}
return 0;
}
It's a very basic sample that connects to some special hardware (the code for that is left out).
It grabs frames of video and audio, then processes the audio to go from interleaved PCM to individual arrays of PCM data for each track,
and then each buffer is added to the appropriate track, be it video or audio.
Lastly the AVAssetWriter is finished and closed, and I exit and clean up.
Any help will be most appreciated.
Cheers,
James
Well, I finally found a working solution to this problem.
The solution comes in two parts:
I moved from using CMAudioSampleBufferCreateWithPacketDescriptions
to using CMSampleBufferCreate(..) with the appropriate arguments to that function call.
Initially, when experimenting with CMSampleBufferCreate, I was misusing some of the arguments and it was giving me the same results as I initially outlined here, but with careful examination of the values I was passing in the CMSampleTimingInfo struct - specifically the duration part - I eventually got everything working correctly.
So it appears that I was creating the CMBlockBufferRef correctly, but I needed to take more care when using it to create the CMSampleBufferRef that I was passing to the AVAssetWriterInput.
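To illustrate the shape of the fix (this is a simplified sketch rather than my exact code; it assumes 48 kHz, 16-bit mono PCM per track and reuses the blockBuf / audioFormatDesc / status / nAudioSamples* names from the listing above):

CMSampleBufferRef cmBuf = NULL;
CMSampleTimingInfo timing;
timing.duration              = CMTimeMake(1, 48000);                    // duration of ONE sample, not of the whole buffer
timing.presentationTimeStamp = CMTimeMake(nAudioSamplesWritten, 48000); // where this buffer starts on the track's timeline
timing.decodeTimeStamp       = kCMTimeInvalid;
status = CMSampleBufferCreate(kCFAllocatorDefault,
                              blockBuf,               // the CMBlockBufferRef wrapping the PCM bytes
                              TRUE,                   // dataReady
                              NULL, NULL,             // no make-data-ready callback
                              audioFormatDesc,
                              nAudioSamplesCaptured,  // number of samples in this buffer
                              1, &timing,             // one timing entry covering the whole buffer
                              0, NULL,                // no per-sample sizes needed for constant-size LPCM
                              &cmBuf);
if (status == noErr)
{
    // append cmBuf to the matching assetWriterAudioInput[t], then CFRelease() it as before
}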
Hope this helps someone else, as it was a nasty one for me to solve!
James

Arduino Uno: if string contains a word

I'm very new to the Arduino Uno and need some advice, so here we go.
I want to use my Arduino to:
1. read my serial data - received as plain text
2. look for a specific word within a line of received data
3. only transmit/print the complete string if it contains the specific "word"
I found this sketch, but it only works if I'm looking for a single char:
// Example 3 - Receive with start- and end-markers
const byte numChars = 32;
char receivedChars[numChars];
boolean newData = false;
void setup() {
Serial.begin(9600);
Serial.println("<Arduino is ready>");
}
void loop() {
recvWithStartEndMarkers();
showNewData();
}
void recvWithStartEndMarkers() {
static boolean recvInProgress = false;
static byte ndx = 0;
char startMarker = '<';
char endMarker = '>';
char rc;
while (Serial.available() > 0 && newData == false) {
rc = Serial.read();
if (recvInProgress == true) {
if (rc != endMarker) {
receivedChars[ndx] = rc;
ndx++;
if (ndx >= numChars) {
ndx = numChars - 1;
}
}
else {
receivedChars[ndx] = '\0'; // terminate the string
recvInProgress = false;
ndx = 0;
newData = true;
}
}
else if (rc == startMarker) {
recvInProgress = true;
}
}
}
void showNewData() {
if (newData == true) {
Serial.print("This just in ... ");
Serial.println(receivedChars);
newData = false;
}
}
I think it is better to use the String class methods.
You can get the data using Serial.readString(),
then use the String methods (such as indexOf()) to look for a specific word; see the sketch below.
Here are some useful links:
https://www.arduino.cc/en/Serial/ReadString
https://www.arduino.cc/en/Reference/StringObject
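For example, a minimal (untested) sketch along those lines; the keyword "temperature" is just a placeholder for the word you are looking for:

// Minimal sketch: print an incoming line only if it contains a keyword.
const String keyword = "temperature";   // placeholder - use your own word

void setup() {
  Serial.begin(9600);
  Serial.setTimeout(100);               // so readString() returns shortly after the sender stops
  Serial.println("<Arduino is ready>");
}

void loop() {
  if (Serial.available() > 0) {
    String line = Serial.readString();  // read whatever text has arrived
    if (line.indexOf(keyword) >= 0) {   // indexOf() returns -1 when the word is absent
      Serial.print("This just in ... ");
      Serial.println(line);
    }
  }
}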

FMOD API - Can't load sound from memory

int main(int arg, char *args[])
{
FMOD::System *System;
FMOD::Sound *Sound;
FMOD::Channel *Channel = 0;
FMOD_CREATESOUNDEXINFO exinfo;
FMOD_RESULT result;
void *Buffer = 0;
int Key;
ZIPENTRY ze;
HZIP hz = OpenZip("C:\\Users\\Lukas\\Desktop\\Music.pak", "");
FindZipItem(hz, "Recording 1.mp3", true, NULL, &ze);
Buffer = malloc(ze.unc_size);
UnzipItem(hz, ze.index, Buffer, ze.unc_size);
CloseZip(hz);
ZeroMemory(&exinfo, sizeof(FMOD_CREATESOUNDEXINFO));
exinfo.cbsize = sizeof(FMOD_CREATESOUNDEXINFO);
exinfo.length = ze.unc_size;
result = FMOD::System_Create(&System);
result = System->init(32, FMOD_INIT_NORMAL, 0);
result = System->createSound((const char*)Buffer, FMOD_HARDWARE | FMOD_OPENMEMORY, &exinfo, &Sound);
result = System->playSound(FMOD_CHANNEL_FREE, Sound, false, &Channel);
while(TRUE)
{
if(_kbhit())
{
Key = _getch();
if(Key == 27)break;
}
}
Sound->release();
System->close();
System->release();
return 0;
}
The sound is loaded into memory correctly,
but I have a problem with the System->createSound() function. It returns FMOD_ERR_INVALID_PARAM, but everything should be okay (compared with the FMOD examples).
Thanks for any answers.
Everything is okay, I just forgot to copy the DLL file ^^
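That said, checking each FMOD_RESULT instead of discarding it makes problems like this much easier to spot. A rough sketch of that pattern (using FMOD_ErrorString() from fmod_errors.h, as the official examples do):

#include <fmod_errors.h>   // FMOD_ErrorString()
#include <cstdio>
#include <cstdlib>

// Small helper: print the error text and stop on any failing FMOD call.
void ERRCHECK(FMOD_RESULT result)
{
    if (result != FMOD_OK)
    {
        printf("FMOD error %d - %s\n", result, FMOD_ErrorString(result));
        exit(-1);
    }
}

// Usage with the calls from the listing above:
//   ERRCHECK(FMOD::System_Create(&System));
//   ERRCHECK(System->init(32, FMOD_INIT_NORMAL, 0));
//   ERRCHECK(System->createSound((const char*)Buffer, FMOD_HARDWARE | FMOD_OPENMEMORY, &exinfo, &Sound));
//   ERRCHECK(System->playSound(FMOD_CHANNEL_FREE, Sound, false, &Channel));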
