How do I determine the length of an audio recording that I am recording programmatically? My current solution is simply to time the start/stop recording events in the user interface (literally the time between when the user hits record and when they hit stop). Given a .aac audio file, is there a library call in Objective-C or Python to determine its length?
On the iPad:
totalFrames = format->mFramesPerPacket * recordedPackets;   // format is the AudioStreamBasicDescription used for recording
lengthInSeconds = totalFrames / format->mSampleRate;
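For the second part of the question (a library call that works on an existing .aac file), one option on iOS is the AudioToolbox C API, which is callable from Objective-C or C++. This is a minimal sketch rather than a complete implementation, and the file path in the usage line is just a placeholder:

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Ask Core Audio for the estimated duration of an audio file.
// Returns 0 if the file could not be opened.
Float64 estimatedDurationSeconds(const char *path) {
    CFURLRef url = CFURLCreateFromFileSystemRepresentation(
        kCFAllocatorDefault, (const UInt8 *)path, strlen(path), false);

    AudioFileID fileID = NULL;
    Float64 duration = 0;
    if (AudioFileOpenURL(url, kAudioFileReadPermission, 0, &fileID) == noErr) {
        UInt32 size = sizeof(duration);
        AudioFileGetProperty(fileID, kAudioFilePropertyEstimatedDuration,
                             &size, &duration);
        AudioFileClose(fileID);
    }
    CFRelease(url);
    return duration;
}

// Usage (placeholder path):
// Float64 seconds = estimatedDurationSeconds("/path/to/recording.aac");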
I have a source MP4 file with a duration of 17 seconds (for example).
When I convert the video to Apple HLS using AWS MediaConvert, I get an m3u8 file with a duration of 18 seconds, i.e. an #EXTINF:18 tag in the m3u8.
I use ABR mode.
The SegmentControl settings are the defaults:
{
  "OutputGroups": [
    {
      "Name": "Apple HLS",
      "OutputGroupSettings": {
        "Type": "HLS_GROUP_SETTINGS",
        "HlsGroupSettings": {
          "SegmentLength": 10,
          "MinSegmentLength": 0,
          "TargetDurationCompatibilityMode": "LEGACY",
          "SegmentLengthControl": "GOP_MULTIPLE",
          "SegmentControl": "SEGMENTED_FILES"
        }
      }
    }
  ]
}
How can I fix this? I tried changing various HlsGroupSettings, but the result stays the same.
Thanks for your post. MediaConvert has a default setting that uses whole integers for manifest durations. This means that if the source asset has even one extra frame of video or audio, the service will add a whole second to the segment duration, which may be why your output appears to be one second longer than expected. You can change this segment duration setting to a floating-point duration under "HLS Output Group / Advanced / Manifest duration format". Try this and you might find the last segment is only slightly longer than expected.
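If you are editing the job settings JSON directly rather than the console, that console setting corresponds to the ManifestDurationFormat field of HlsGroupSettings; a sketch of the change (please verify the field name against the current API reference for your endpoint version):

"HlsGroupSettings": {
  "SegmentLength": 10,
  "MinSegmentLength": 0,
  "TargetDurationCompatibilityMode": "LEGACY",
  "SegmentLengthControl": "GOP_MULTIPLE",
  "SegmentControl": "SEGMENTED_FILES",
  "ManifestDurationFormat": "FLOATING_POINT"
}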
You can ensure the source asset is exactly XX seconds long by using the "Input clips" feature to specify a specific start and end timecode (HH:MM:SS:FF).
For the widest compatibility with streaming players, we recommend using 1 second as the minimum segment duration. Very short segments (<1 s) sometimes get skipped by some players or flagged by stream-quality-checking products. If a few extra frames of source content are found, they will be added to the previous segment.
When measuring durations, be sure to check the actual media track durations and not just the file header metadata. Utilities such as ffprobe or mediainfo (use the --full flag) are helpful for this; for example, ffprobe's -show_frames output lists every frame. The pts_time of each frame indicates when it is supposed to start, and pkt_duration_time indicates the duration of each frame.
I need to get waveform data from a WAV file, but my code does not return the right waveform (I compared my results with the waveform from FL Studio).
This is my code:
path = "/storage/emulated/0/FLM User
Files/My Samples/808 (16).wav";
waveb = FileUtil.readFile(path);
waveb = waveb.substring((int) (waveb.indexOf("data") + 4), (int)(waveb.length()));
byte[] b = waveb.getBytes();
for(int i= 0; i < (int)(b.length/4); i++) {
map = new HashMap<>();
map.put("value", String.valueOf((long)((b[i*4] & 0xFF) +
((b[i*4+1] & 0xFF) << 8))));
map.put("byte", String.valueOf((long)(b[i*4])));
l.add(map);
}
listview1.setAdapter(new
Listview1Adapter(l));
( (BaseAdapter)listview1.getAdapter()).notifyDataSetChanged();
My results:
FL Studio Mobile results:
I'm not sure I can help, given what I know off the top of my head, but perhaps this will trigger some ideas in your search for a solution.
It looks to me like you are assuming the sound file is 16-bit stereo, little-endian, and that you are only attempting to inspect one track of the stereo frame. Can you confirm this?
There's at least one way this plan could go awry: the .wav header may be an odd number of bytes in length, and you might not be properly parsing frame boundaries as a result. As an experiment, maybe try adding a different increment when you reference the b[] array? For example, b[i*4 + 1] and b[i*4 + 2] instead of b[i*4] and b[i*4 + 1]. This won't solve the general problem of parsing .wav headers, but it could at least get you closer to understanding the situation.
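To make the byte arithmetic concrete, here is a rough sketch (in C++ rather than your Java, and assuming 16-bit little-endian stereo PCM with the start of the data chunk already located) of assembling the left-channel sample of each frame. Note that the high byte carries the sign, so the combined value needs to be reinterpreted as a signed 16-bit integer rather than left as an unsigned sum:

#include <cstdint>
#include <cstddef>

// bytes points at the first sample of the data chunk, numBytes is its size.
// Frame layout for 16-bit stereo: [L low][L high][R low][R high], repeated.
void extractLeftChannel(const uint8_t *bytes, size_t numBytes, int16_t *outLeft) {
    const size_t bytesPerFrame = 4;                       // 2 bytes per sample * 2 channels
    const size_t frameCount = numBytes / bytesPerFrame;
    for (size_t i = 0; i < frameCount; ++i) {
        const uint8_t lo = bytes[i * bytesPerFrame];      // low byte of the left sample
        const uint8_t hi = bytes[i * bytesPerFrame + 1];  // high byte carries the sign
        outLeft[i] = (int16_t)(lo | (hi << 8));           // reinterpret the 16-bit pattern as signed
    }
}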
It sure looks like Java's AudioInputStream is not accessible in Android, and all the searches I have made asking whether there is an Android equivalent turn up unanswered.
I've used AudioTrack for the playback of raw PCM, but I don't know an Android equivalent for reading wav files. The AudioRecord class and read() methods look interesting as the read methods store PCM data in a short array, but I've never used them, and they seem to be hard-coded to the microphone for input.
There used to be a Google Group, andraudio@googlegroups.com. I don't know if it is still around. I used to go there and occasionally ask about things.
Maybe there is code you can use from Oboe or libGDX? The latter makes use of OpenAL and is for cross-platform development, with Android as one of the target platforms. I have not looked into either for this question.
If you do find the answer, it would be great to post it as a solution. This seems to be a matter that many have tried to solve and given up on.
I'm trying to decode H.264 video in hardware with the Stagefright library.
I have used an example from here. I'm getting the decoded data in a MediaBuffer. To render MediaBuffer->data() I tried AwesomeLocalRenderer from AwesomePlayer.cpp,
but the picture on screen is distorted.
Here is the link to the original and the corrupted picture.
I also tried this in the example:
sp<MetaData> metaData = mVideoBuffer->meta_data();
int64_t timeUs = 0;
metaData->findInt64(kKeyTime, &timeUs);
native_window_set_buffers_timestamp(mNativeWindow.get(), timeUs * 1000);
err = mNativeWindow->queueBuffer(mNativeWindow.get(),
                                 mVideoBuffer->graphicBuffer().get(), -1);
But my native code crashes. I can't get a real picture; it is either corrupted or a black screen.
Thanks in advance.
If you are using a HW accelerated decoder, then the allocation on the output port of your component would have been based on a Native Window. In other words, the output buffer is basically a gralloc handle which has been passed by the Stagefright framework. (Ref: OMXCodec::allocateOutputBuffersFromNativeWindow). Hence, the MediaBuffer being returned shouldn't be interpreted as a plain YUV buffer.
In the case of AwesomeLocalRenderer, the framework performs a software color conversion when mTarget->render is invoked, as shown here. If you trace the code flow, you will find that the MediaBuffer content is directly interpreted as a YUV buffer.
For HW accelerated codecs, you should be employing AwesomeNativeWindowRenderer. If you have any special conditions for employing AwesomeLocalRenderer, please do highlight the same. I can refine this response appropriately.
P.S.: For debugging purposes, you could also refer to this question, which captures the methods to dump the YUV data and analyze it.
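For the software-decoder case, where the MediaBuffer really does contain raw YUV, a minimal dump routine might look like the sketch below (this is not the code from the linked question; the class and method names are as in the AOSP Stagefright headers). The resulting file can then be inspected with any raw YUV viewer:

#include <stdio.h>
#include <stdint.h>
#include <media/stagefright/MediaBuffer.h>

// Append one decoded frame's payload to an already-open dump file.
void dumpDecodedBuffer(android::MediaBuffer *buffer, FILE *fp) {
    const uint8_t *data =
        (const uint8_t *)buffer->data() + buffer->range_offset();
    fwrite(data, 1, buffer->range_length(), fp);
}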
I want to record calls in Asterisk in a loop of one minute per audio file. That means there are many recording files per call, but each one is one minute long. For example, the recording file names for a 10-minute call would be: audioRec1.wav, audioRec2.wav, audioRec3.wav, audioRec4.wav ... audioRec10.wav.
Is it possible to do this in Asterisk? If not, is there any program that does this job?
Thank you very much!
You have 4 options:
1) Create an external controller app, which remembers all recording states and, when a call runs longer than a minute, stops the current monitor and starts a new one (a rough AMI example is sketched after this list).
Connect via AMI: http://www.voip-info.org/wiki/view/Asterisk+manager+API
Stop the call monitor: http://www.voip-info.org/wiki/view/Asterisk+Manager+API+Action+StopMonitor
Change the name: http://www.voip-info.org/wiki/view/Asterisk+Manager+API+Action+ChangeMonitor
Start new monitoring: http://www.voip-info.org/wiki/view/Asterisk+Manager+API+Action+Monitor
2) Change the Asterisk source code to do what you expect. Very complex; requires guru-level knowledge of Asterisk.
3) Record with the Asterisk Record application using ChanSpy in a loop, always changing files:
http://www.voip-info.org/wiki/view/Asterisk+cmd+Record
http://www.voip-info.org/wiki/view/Asterisk+cmd+ChanSpy
4) Split the file after the recording ends. This is a very easy method.
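As a rough illustration of option 1 (the channel name and file names here are made up), the controller would send something like the following AMI actions each time a recording reaches one minute:

Action: StopMonitor
Channel: SIP/1001-00000001

Action: Monitor
Channel: SIP/1001-00000001
File: audioRec2
Format: wav
Mix: true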
I want to split a multichannel (2, 8 or 16 channels) WAV file into its channels and save every channel in a separate WAV file.
So far I've managed to get libsox up and running in my C++ / Objective-C++ project.
libsox isn't well documented and there aren't many examples of how to do this :(
I started by opening the input file:
sox_format_t * in, * out;
assert(sox_init() == SOX_SUCCESS);
assert(in = sox_open_read((const char*)filename.c_str(),NULL,NULL,NULL));
Now I need to find a way to get the number of channels in this file. Then I have to create the same number of output files and save each channel into its own file.
How can I do this?
Thanks!
I think I will do it the old-fashioned way:
Determine the channel count of the file.
Determine the length of the data block.
Length of data block / channel count = size of each channel's data.
Samples are interleaved inside a WAV file's data block like this (for a 4-channel WAV file):
CH1/CH2/CH3/CH4 CH1/CH2/CH3/CH4 ...
I run through the data block, extract the channels, and put each one into a mono WAV file.
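For what it is worth, here is a rough libsox sketch of that plan (untested; the calls follow the libsox 14.4 API, and the output file naming is made up). It reads interleaved frames and writes one mono WAV file per channel:

#include <sox.h>
#include <cassert>
#include <string>
#include <vector>

int main() {
    assert(sox_init() == SOX_SUCCESS);

    sox_format_t *in = sox_open_read("multichannel.wav", NULL, NULL, NULL);
    assert(in != NULL);

    const unsigned channels = in->signal.channels;   // e.g. 2, 8 or 16

    // One mono output file per channel, same sample rate as the input.
    std::vector<sox_format_t *> outs(channels);
    for (unsigned c = 0; c < channels; ++c) {
        sox_signalinfo_t mono = in->signal;
        mono.channels = 1;
        mono.length = 0;                             // length unknown up front
        std::string name = "channel_" + std::to_string(c) + ".wav";
        outs[c] = sox_open_write(name.c_str(), &mono, NULL, "wav", NULL, NULL);
        assert(outs[c] != NULL);
    }

    // libsox hands back interleaved samples: one frame = one sample per channel.
    const size_t framesPerRead = 2048;
    std::vector<sox_sample_t> buf(framesPerRead * channels);
    std::vector<sox_sample_t> chan(framesPerRead);
    size_t read;
    while ((read = sox_read(in, buf.data(), buf.size())) > 0) {
        const size_t frames = read / channels;
        for (unsigned c = 0; c < channels; ++c) {
            for (size_t f = 0; f < frames; ++f)
                chan[f] = buf[f * channels + c];     // de-interleave channel c
            sox_write(outs[c], chan.data(), frames);
        }
    }

    for (unsigned c = 0; c < channels; ++c) sox_close(outs[c]);
    sox_close(in);
    sox_quit();
    return 0;
}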