How does one Capture MP3s in J2ME? - java-me

I was able to capture audio in the WAV format through Manager.createPlayer("capture://audio"). However, is there a way to capture audio in the MP3 format in J2ME?

It will likely depend on the platform in question; you would have to check the different device implementations you want to support.

Rory, what do you mean?
I was really asking for the String for the createPlayer(String s) method. J2ME automatically records to a WAV file, but I was wondering if I could request that it record to MP3. Of course, if that MP3 argument did not work, a MediaException would be thrown. Please forgive me if it seems that I am missing the point of your response.
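For reference, the MMAPI (JSR-135) capture locator can carry an encoding parameter, and the device advertises the capture encodings it supports through the "audio.encodings" system property. Here is a minimal sketch; the "audio/mpeg" value is an assumption, and most devices only support PCM and will throw exactly the MediaException mentioned above:

import java.io.IOException;
import javax.microedition.media.Manager;
import javax.microedition.media.MediaException;
import javax.microedition.media.Player;

public class AudioCaptureHelper {
    // Lists what the device claims to support, then tries an MP3 capture
    // locator and falls back to the default (usually PCM/WAV) capture.
    public static Player openCapturePlayer() throws IOException, MediaException {
        // e.g. "encoding=pcm encoding=amr ..." -- device dependent
        String encodings = System.getProperty("audio.encodings");
        System.out.println("Supported capture encodings: " + encodings);
        try {
            // "audio/mpeg" is an assumption; most devices will reject it
            return Manager.createPlayer("capture://audio?encoding=audio/mpeg");
        } catch (MediaException e) {
            // MP3 capture not supported -- fall back to the default format
            return Manager.createPlayer("capture://audio");
        }
    }
}

If "audio.encodings" does not list an MP3-style encoding, no locator string will make the device record MP3; you would have to capture PCM and encode it afterwards.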

Related

Read audio channel data from video file nodejs

I want to read the audio frequency data from an MP4 video file and get it as an array (not as an MP3 file); that's it, no need to do anything fancy.
Currently I'm doing it with the Web Audio API in JavaScript.
However, I need to do it with Node.js, and I want to make this as fast as possible.
I don't care about the video frame data or anything else.
I've been trying with ffmpeg, but it seems very hard.
Is there another way, maybe with fs?
Thank you in advance.

rtmp audio message(0x08) format (mp3)

I'm trying to write a little client for RTMP (audio only). So far I have the communication working (Red5 server), but now I'm stuck with the audio data.
The server is sending MP3, 44 kHz, 16-bit stereo.
I get my audio message, which consists of the byte identifying the codec (0x2f) and the audio data, which looks for example like this:
ff:fb:92:64:eb:80:03:98:58:d2:e9:26:1b:7e:5d:e7:4a:1a:19:26:5c:8b:89:07:47:44:98:6b:91:2d:9c:28:b4:33:15:70:82:c9:29:87:8d:e4:8f:31:83:84:7b:e5:82:b5:57:62:00:02:e5:bb:f1:86:15:7a:8f:da:9e:ca:4f:83:9d:0a:c4:56:7b:b3:3d:56:43:ba:2b:28:b8:9d:0c:e1:82:0c:08:36:24:f3:39:67:54:b7:41:d9:8e:ef:36:96:56:22:d2:b9:9f:ae:40:43:8e:ea:39:52:0c:a4:48:25:02:54:91:c7:35:37:2d:be:f2:37:23:61:65:35:d9:0f:aa:18:b4:37:d9:d4:c8:68:21:3c:bd:ea:c1:d0:98:df:eb:96:59:99:88:09:37:36:c3:8b:47:80:64:84:41:ba:35:ea:a6:0a:d6:74:9e:09:f6:a5:d7:3f:1f:53:d8:fb:8d:d9:d3:f8:ee:c7:c1:68:25:25:8e:ae:6a:1c:08:52:9d:58:cf:cf:87:c1:ba:a4:f0:63:76:b0:b4:65:79:1b:3b:21:5f:2f:b5:7a:18:43:af:f7:fd:15:0c:87:c9:73:54:95:22:94:cc:cb:e3:da:4d:e0:f3:8a:95:69:69:eb:32:71:57:08:49:76:e0:f3:84:8c:4b:4c:84:6b:5d:7a:c8:c9:d7:df:d5:e2:68:bb:5f:6c:9f:ba:f4:0a:6c:6e:51:8a:b3:59:9a:07:0c:e4:2a:9d:ec:d1:99:53:48:f2:8b:22:b2:d3:bf:e1:5b:9f:ee:49:9f:2c:ee:63:1f:6f:da:90:e7:65:00:55:99:97:77:b9:e8:97:43:81:fd:32:e4:81:20:d0:78:f5:4f:59:47:39:f2:57:5d:f4:d5:91:48:c9:45:10:52:49:4d:04:87:6b:0e:a5:72:ed:34:74:08:93:5b:8a:54:3a:d9:7e:53:8f:c7:5e:b1:99:f3:55:63:72:49:99:55:3a:b8:0d:73:3b:2a:ea:9a:b5:32:d2:3b:61:c2:4e:e9:56:78:99:14:4a:a7:46:f4:ee:ae:6f:ff:c8:85:2d:07:68:ad:e2:84:dd:0a:bd:2e:93:12:43
I can't find anything about the data format. Since the first byte is always 0xff, I assume every chunk of audio data has a little header describing its contents.
The RTMP spec from Adobe doesn't spend a single word on the format of the audio message payload (just two lines saying it's an audio message... wow).
Does anyone know the format of the audio messages, or at least a source where I can find something?
The Adobe spec doesn't document the elementary stream formats because they are covered in their own documents, which are usually quite large. MP3 is covered by ISO/IEC 11172-3.
There is a good rundown available here:
http://www.mpgedit.org/mpgedit/mpeg_format/mpeghdr.htm
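As a rough illustration of what that page describes (a sketch, not production code; the tables are abridged to MPEG-1 Layer III, which is what the dump above contains), a frame header decoder could look like this:

// Decode a 4-byte MPEG-1 Layer III frame header, e.g. ff fb 92 64 from the dump above.
public final class Mp3FrameHeader {
    // MPEG-1 Layer III bitrate table; index 0 = "free", 15 = invalid
    private static final int[] BITRATES_KBPS = {
        0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, -1 };
    // MPEG-1 sampling rates; index 3 = reserved
    private static final int[] SAMPLE_RATES_HZ = { 44100, 48000, 32000, -1 };

    public static void describe(int b0, int b1, int b2, int b3) {
        int header = (b0 << 24) | (b1 << 16) | (b2 << 8) | b3;
        if ((header & 0xFFE00000) != 0xFFE00000)
            throw new IllegalArgumentException("no frame sync (11 set bits)");

        int version     = (header >> 19) & 0x3;  // 3 = MPEG-1
        int layer       = (header >> 17) & 0x3;  // 1 = Layer III
        int bitrateIdx  = (header >> 12) & 0xF;
        int sampleIdx   = (header >> 10) & 0x3;
        int padding     = (header >>  9) & 0x1;
        int channelMode = (header >>  6) & 0x3;  // 0=stereo, 1=joint stereo, 2=dual, 3=mono

        int bitrate = BITRATES_KBPS[bitrateIdx];
        int sampleRate = SAMPLE_RATES_HZ[sampleIdx];
        // Frame length in bytes for MPEG-1 Layer III
        int frameLength = 144 * bitrate * 1000 / sampleRate + padding;

        System.out.println(bitrate + " kbps, " + sampleRate + " Hz, channel mode "
                + channelMode + ", frame length " + frameLength + " bytes"
                + " (version=" + version + ", layer=" + layer + ")");
    }

    public static void main(String[] args) {
        describe(0xff, 0xfb, 0x92, 0x64);
    }
}

Feeding it the first four bytes of the dump (ff fb 92 64) yields 128 kbps, 44100 Hz, joint stereo, 418-byte frames, which matches what the server claims to be sending.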

ffmpeg - Can I draw an audio channel as an image?

I'm wondering if it's possible to draw an audio channel of a video or audio file as an image using ffmpeg, or if there's another tool that would do it on Win2k8 x64. I'm doing this as part of an encoding process after a user uploads a video or audio file.
I'm using ColdFusion 10 to handle the upload and calling cfexecute to run ffmpeg.
I need the image to look something like this (without the horizontal lines):
You can do this programmatically very easily.
Study the basics of FFmpeg. I suggest you compile this sample. It explains how to open a video/audio file, identify the streams and loop over the packets.
Once you have a data packet (in this case you are interested only in the audio packets), you decode it (line 87 of that sample) and obtain the raw audio data. That is the waveform itself (the analogue "bitmap" of the audio).
You could also study this sample. This second example shows how to write a video/audio file. You don't want to write any video, but with this sample you can easily understand how the raw audio data packets work if you look at the functions get_audio_frame() and write_audio_frame().
You need to have some knowledge about creating a bitmap. Any platform has an easy way to do that.
So, the answer for you: YES, IT IS POSSIBLE TO DO THIS WITH FFMPEG! But you have to code a little bit in order to get what you want...
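For the bitmap step, here is a minimal sketch in plain Java using BufferedImage; it assumes you already have the decoded 16-bit PCM samples of one channel in a short[], however you obtained them:

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class WaveformImage {
    // Draws one min/max column of samples per pixel column and writes a PNG.
    public static void render(short[] samples, int width, int height, File out) throws Exception {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, width, height);
        g.setColor(Color.BLUE);

        int samplesPerColumn = Math.max(1, samples.length / width);
        for (int x = 0; x < width; x++) {
            int start = x * samplesPerColumn;
            int end = Math.min(samples.length, start + samplesPerColumn);
            if (start >= samples.length) break;
            short min = Short.MAX_VALUE, max = Short.MIN_VALUE;
            for (int i = start; i < end; i++) {
                if (samples[i] < min) min = samples[i];
                if (samples[i] > max) max = samples[i];
            }
            // Map [-32768, 32767] onto [height-1, 0] and draw a vertical line.
            int yTop = (int) ((32767 - max) * (height - 1L) / 65535);
            int yBottom = (int) ((32767 - min) * (height - 1L) / 65535);
            g.drawLine(x, yTop, x, yBottom);
        }
        g.dispose();
        ImageIO.write(img, "png", out);
    }
}

Drawing a min/max pair per pixel column is the usual trick for long recordings: the image stays readable no matter how many samples there are.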
UPDATE:
Sorry, there are ALSO built-in filters for this:
showspectrum, showwaves, avectorscope
You could use those filters instead of decoding the audio yourself.
Here are some examples of how to use them: FFmpeg Filters - 12.22 showwaves.
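For example (file names are placeholders), showwaves renders the waveform as video frames, and newer ffmpeg builds also have showwavespic, which draws the whole file into a single still image:

ffmpeg -i input.mp4 -filter_complex "showwaves=s=640x120:mode=line" -an waves.mp4

ffmpeg -i input.mp4 -filter_complex "showwavespic=s=640x120" -frames:v 1 waveform.png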

Include simple sound in iphone app

I searched many questions, but none seems to give the simplest, most uniform approach, so please do not close this as a duplicate.
My requirement is simple: I have a quiz app.
I want to include:
background music that plays continually - probably more than one audio track.
occasional sounds played at specific events - they are very short in duration, maybe 4-5 in number.
What sound format do I use? [AAC etc.]
How do I produce it? (optionally, get it from the internet, if free)
What is the best approach to incorporate it? [audio playback, OpenAL etc.]
Forgive me if this is quite stupid, but I am going very generic here and can't seem to find it.
Thanks for the help!
For sound format, use AAC or uncompressed 16-bit little endian in a CAF container (avoid mp3 since it's difficult to make it loop cleanly). You can convert using the command line tool 'afconvert':
Compressed:
afconvert -f caff -d aac sourcefile.wav destfile.caf
Uncompressed 16-bit:
afconvert -f caff -d LEI16 sourcefile.wav destfile.caf
For production, either record it yourself (using an audio program such as Audacity), get a professional to do it, or buy royalty free sounds/music.
To incorporate it, use AVAudioPlayer for music and OpenAL for sounds. OpenAL is difficult to use and doesn't decode compressed audio on its own, so you may want to use an audio library such as https://github.com/kstenerud/ObjectAL-for-iPhone

Video capture in Direct show samples (AMCap)

I am using the DirectShow samples (AMCap) to capture live video streams. The video seems perfect, but it does not capture audio with it.
I am not able to find out the reason. Can anyone please help me solve this problem?
Thank You.
Earlier SDKs, e.g. the Microsoft® DirectX® 9.0 SDK Update (October 2004), contained more samples, including audio capture, e.g.:
\DirectShow\Samples\C++\DirectShow\Capture\AudioCap
AudioCap
NOTE: In order to write .WAV files to your disk, you must first build and register the WavDest filter in the Samples\Multimedia\DirectShow\Filters\WAVDest directory. Without this filter, you may audition audio input, but you will not be able to write it to your disk.
