Converting raw audio into MP3 with LAME

I'm uploading the file to Firebase Storage as a Float32Array, but in order to play it back I have to convert it to MP3, WAV, or OGG, either before storing it in Firebase or after I get the download URL.
I chose the first option, using LAME (lamejs):
let mp3Encoder = new lamejs.Mp3Encoder(1, 44100, 128);
let in16Array = Int16Array.from(audioData); //audioData is a Float32Array containing the recorded audio buffer
var mp3Tmp = mp3Encoder.encodeBuffer(in16Array); //encode mp3
//Push encode buffer to mp3Data variable
this.mp3Data.push(mp3Tmp);
//Get end part of mp3
mp3Tmp = mp3Encoder.flush();
//Write the last data to the output; mp3Data now contains the complete mp3
this.mp3Data.push(mp3Tmp);
const blob = new Blob([this.mp3Data], { type: 'audio/mp3' });
However, in Firebase it is not uploaded as an MP3:
const id = this.i + Math.random().toString(36).substring(2) + ".mp3";
this.ref = this.afStorage.ref(id);
this.task = this.ref.put(blob); //code for uploading
Any advice on what I should try next?
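Two things stand out, as an educated guess since the rest of the recorder isn't shown. First, Int16Array.from() truncates floats, so Web Audio samples in the [-1, 1] range collapse to -1, 0, or 1; they need to be scaled to the 16-bit range first. Second, Blob takes the array of encoded chunks itself, not that array wrapped in another array. A minimal sketch of both fixes:
function floatTo16BitPCM(float32Array) {
  const int16 = new Int16Array(float32Array.length);
  for (let i = 0; i < float32Array.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Array[i])); // clamp to [-1, 1]
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;           // scale to 16-bit PCM
  }
  return int16;
}

const mp3Encoder = new lamejs.Mp3Encoder(1, 44100, 128);
this.mp3Data.push(mp3Encoder.encodeBuffer(floatTo16BitPCM(audioData)));
this.mp3Data.push(mp3Encoder.flush()); // final MP3 frames

// Pass the chunk array itself, not [this.mp3Data]
const blob = new Blob(this.mp3Data, { type: 'audio/mp3' });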

You can have a simple audio service:
import { Injectable } from '@angular/core';

@Injectable()
export class AudioService {
  private map = new Map<string, HTMLAudioElement>();

  constructor() { }

  preload(src: string): HTMLAudioElement {
    if (this.map.has(src)) {
      return this.map.get(src);
    }
    const audio = new Audio();
    audio.src = src;
    audio.load();
    this.map.set(src, audio);
    return audio;
  }

  play(src: string) {
    return this.preload(src).play();
  }

  pause(src: string) {
    return this.preload(src).pause();
  }
}
Then you just need to preload it and set the volume:
this.audio = this.audioService.preload('assets/audio.mp3');
this.audio.volume = this.isMuted ? 0 : 1;
And be happy! To play:
this.audio.play();
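One caveat worth adding (not in the original answer): in modern browsers play() returns a Promise that autoplay policies can reject, so it is worth catching:
this.audio.play().catch(err => console.warn('Playback blocked:', err));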

Related

Flutter - how do I get the duration of a picked video file on Web?

I've tried everything and failed to determine the duration of a picked video file on Flutter Web. All libraries on pub.dev need a 'File' and this is not available on the web.
I've not been able to get this from the metadata either.
What worked for me, though I'm unhappy with the approach, is:
// Assumes: import 'dart:html' as html; and import 'dart:ui' as ui;
Widget buildHiddenVideoPlayer(Key key) {
  var _videoElement = html.VideoElement();
  _videoElement.id = 'htmlHiddenVideoID';
  _videoElement.autoplay = true;
  _videoElement.muted = true;
  _videoElement.loop = true;
  _videoElement.controls = false;
  _videoElement.defaultMuted = true;
  if (xfile != null) {
    _videoElement.src = xfile.path; // xfile is the file picked as an XFile
  }
  // ignore: undefined_prefixed_name
  ui.platformViewRegistry.registerViewFactory(
    'htmlHiddenVideoID',
    (int viewId) => _videoElement,
  );
  return HtmlElementView(
    key: key,
    viewType: 'htmlHiddenVideoID',
  );
}
This widget is hidden in a 5 x 5 sized box behind an iFrame widget (in my implementation).
Then when I need the duration of a picked file:
VideoElement element = document.getElementById('htmlHiddenVideoID') as VideoElement;
setState(() {
  element.src = pickedFile.path;
});
while (true) {
  await Future.delayed(const Duration(milliseconds: 200), () {});
  duration = element.duration;
  if (!duration.isNaN) break; // duration is not returned immediately in many cases
}
element.pause(); // stops the hidden video from playing and consuming resources
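Since the metadata can take a while to arrive, the polling loop is worth capping with a timeout so a broken file can't spin forever. A small sketch under the same dart:html assumptions (the helper name is mine):
Future<num?> waitForDuration(html.VideoElement element,
    {Duration timeout = const Duration(seconds: 5)}) async {
  final deadline = DateTime.now().add(timeout);
  while (DateTime.now().isBefore(deadline)) {
    await Future.delayed(const Duration(milliseconds: 200));
    if (!element.duration.isNaN) return element.duration; // metadata arrived
  }
  return null; // give up; the file may be unreadable
}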

Flutter (Dart): Get/Record audio stream from microphone and play it back immediately (real-time)

I need to be able to capture a stream of audio from the microphone and then pass it as an argument or read it immediately in order to play it back as audio. In other frameworks there are excellent tools and functions for this, but I need to achieve the same functionality in Flutter.
Any help or suggestions?
Please try the flutter_sound package:
https://github.com/dooboolab/flutter_sound
Here is a reference article:
https://medium.com/flutterpub/flutter-sound-plugin-audio-recorder-player-e5a455a8beaf
Creating an instance:
FlutterSound flutterSound = new FlutterSound();
Starting the recorder with a listener:
String path = await flutterSound.startRecorder(null);
print('startRecorder: $path');
_recorderSubscription = flutterSound.onRecorderStateChanged.listen((e) {
  DateTime date = new DateTime.fromMillisecondsSinceEpoch(e.currentPosition.toInt());
  String txt = DateFormat('mm:ss:SS', 'en_US').format(date); // DateFormat comes from package:intl/intl.dart
});
Stopping the recorder:
String result = await flutterSound.stopRecorder();
print('stopRecorder: $result');
if (_recorderSubscription != null) {
  _recorderSubscription.cancel();
  _recorderSubscription = null;
}
Starting the player:
String path = await flutterSound.startPlayer(null);
print('startPlayer: $path');
_playerSubscription = flutterSound.onPlayerStateChanged.listen((e) {
  if (e != null) {
    DateTime date = new DateTime.fromMillisecondsSinceEpoch(e.currentPosition.toInt());
    String txt = DateFormat('mm:ss:SS', 'en_US').format(date);
    this.setState(() {
      this._isPlaying = true;
      this._playerTxt = txt.substring(0, 8);
    });
  }
});
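Stopping the player mirrors the recorder teardown; a sketch assuming the same older flutter_sound API as the snippets above (stopPlayer existed alongside stopRecorder in that version):
String result = await flutterSound.stopPlayer();
print('stopPlayer: $result');
if (_playerSubscription != null) {
  _playerSubscription.cancel();
  _playerSubscription = null;
}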

How can I get the recorded file length in Ionic?

I am using the Ionic Cordova media plugin to record audio from the user's phone. Once recorded, I get the file to play and everything is fine. I just want to get the duration of the file, and when I play the recorded file I want to show the minutes:seconds of the file being played. How do I do that?
record_audio() {
  this.file.startRecord();
  this.recording = true;
  /*
  let record_opt: CaptureAudioOptions = { limit: 1 };
  this.mediacapture.captureAudio(record_opt).then(
    (data: MediaFile[]) => alert('audio recorded'),
    (err: CaptureError) => alert('cannot record')
  );
  */
}

cancel_record() {
  this.file.release();
  this.show_record_div = false;
  this.recordedfile_available = false;
}

stop_recording() {
  this.file.stopRecord();
  this.recording = false;
  this.recordedfile_available = true;
  //this.filelength = this.file.getDuration();
}

play_recorded() {
  this.file.play();
}

stop_playing() {
  this.file.stop();
}
Try this:
let duration = this.file.getDuration();
console.log(duration);

// get current playback position
this.file.getCurrentPosition().then((position) => {
  console.log(position);
});
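To display the mm:ss readout the question asks for, one option is to format the seconds and poll the position on a timer. A sketch assuming this.file is the same MediaObject as above; toClock, positionTimer, and playback_label are illustrative names, and getDuration() may return -1 until the recording has been processed:
toClock(seconds: number): string {
  const total = Math.max(0, Math.floor(seconds));
  const mm = String(Math.floor(total / 60)).padStart(2, '0');
  const ss = String(total % 60).padStart(2, '0');
  return `${mm}:${ss}`;
}

play_recorded() {
  this.file.play();
  this.positionTimer = setInterval(() => {
    this.file.getCurrentPosition().then((position) => {
      if (position >= 0) {
        this.playback_label =
          `${this.toClock(position)} / ${this.toClock(this.file.getDuration())}`;
      }
    });
  }, 500);
}

stop_playing() {
  clearInterval(this.positionTimer);
  this.file.stop();
}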

How to play overlapping audio in winrt?

I'm porting an app from wp8 that requires playback of various sounds that can overlap. The only way I've found so far is to use MediaElement, but this doesn't allow overlapping sounds.
QUESTION - what is the easiest and best audio engine to use to play overlapping audio? Ideally I need a small example of how I can do this.
I've looked into WASAPI (http://code.msdn.microsoft.com/windowsapps/Windows-Audio-Session-22dcab6b), but it doesn't look like it supports simple playback?
Maybe I can wrap the MediaFoundation and call it from winrt? (MediaEngine audio playback on WinRT)
Here is my code now, but when I play a new sound it cuts off the previously playing one rather than blending them.
ThreadUtility.runOnUiThread(
    async delegate()
    {
        // TODO doesn't allow sounds to overlap!
        Uri uri = new Uri(R.base_uri, R.raw.URI_PREFIX + resourceId);
        StorageFile storageFile =
            await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(uri);
        MediaElement element = new MediaElement();
        var randomAccessStream = await storageFile.OpenReadAsync();
        element.SetSource(randomAccessStream, storageFile.ContentType);
        element.Volume = volume;
        element.PlaybackRate = pitch;
        //TODO element.Pan = pan;
        element.Play();
    }
);
SOLUTION (as per Filip's answer):
in the page class:
var mediaElements = new LinkedList<MediaElement>();
for (int channel = 0; channel < TeacherSoundGroover.NUM_CHANNELS; channel++)
{
    var mediaElement = new MediaElement();
    mediaElements.add(mediaElement);
    // Must be in the visual tree, otherwise the sounds won't overlap!
    m_titlePanel.Children.Add(mediaElement);
}
m_soundPlayer = new MySoundPlayer(mediaElements);
in the MySoundPlayer class:
ThreadUtility.runOnUiThread(
    async delegate()
    {
        Uri uri = new Uri(R.base_uri, R.raw.URI_PREFIX + resourceId);
        StorageFile storageFile =
            await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(uri);
        if (m_mediaElements != null)
        {
            int count = m_mediaElements.size();
            if (count > 0)
            {
                // Round-robin across the pre-created MediaElements
                int channel = m_nextMediaElementToUse % count;
                m_nextMediaElementToUse++;
                MediaElement element = m_mediaElements.get(channel);
                var randomAccessStream = await storageFile.OpenReadAsync();
                element.Stop();
                element.DefaultPlaybackRate = rate;
                element.SetSource(randomAccessStream, storageFile.ContentType);
                element.Volume = volume;
                element.Balance = pan;
                element.Play();
            }
        }
    }
);
The easiest thing to do is use multiple MediaElement controls, though that might not give you the desired results. The best way is to use XAudio2, either directly or through SharpDX if you want to avoid creating a C++/CX WinRT component.
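For the XAudio2-through-SharpDX route, the gist is one MasteringVoice for the app plus one SourceVoice per simultaneous sound; each SourceVoice plays independently, so the sounds mix naturally. A rough sketch based on the standard SharpDX sample (file access details will differ in a packaged WinRT app):
using System.IO;
using SharpDX.Multimedia;
using SharpDX.XAudio2;

// One engine + mastering voice for the whole app
var xaudio2 = new XAudio2();
var masteringVoice = new MasteringVoice(xaudio2);

// One SourceVoice per playing sound; create a fresh one for each shot
var stream = new SoundStream(File.OpenRead("sound.wav"));
var waveFormat = stream.Format;
var buffer = new AudioBuffer
{
    Stream = stream.ToDataStream(),
    AudioBytes = (int)stream.Length,
    Flags = BufferFlags.EndOfStream
};
stream.Close();

var sourceVoice = new SourceVoice(xaudio2, waveFormat, true);
sourceVoice.SubmitSourceBuffer(buffer, stream.DecodedPacketsInfo);
sourceVoice.Start(); // returns immediately; concurrent voices are mixed together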

GDI+ Generic Error

When my images are being loaded from my database on my web server, I see the following error:
A generic error occurred in GDI+. at
System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder,
EncoderParameters encoderParams) at
System.Drawing.Image.Save(Stream stream, ImageFormat format) at
MyWeb.Helpers.ImageHandler.ProcessRequest(HttpContext context)
All my code is attempting to do is load the image. Can anybody take a look and let me know what I'm doing wrong?
Note - This works if I test it on my local machine, but not when I deploy it to my web server.
public void ProcessRequest(HttpContext context)
{
    context.Response.Clear();
    if (!String.IsNullOrEmpty(context.Request.QueryString["imageid"]))
    {
        int imageID = Convert.ToInt32(context.Request.QueryString["imageid"]);
        int isThumbnail = Convert.ToInt32(context.Request.QueryString["thumbnail"]);

        // Retrieve this image from the database
        Image image = GetImage(imageID);

        // Make it a thumbnail if requested
        if (isThumbnail == 1)
        {
            Image.GetThumbnailImageAbort myCallback = new Image.GetThumbnailImageAbort(ThumbnailCallback);
            image = image.GetThumbnailImage(200, 200, myCallback, IntPtr.Zero);
        }

        context.Response.ContentType = "image/png";
        // Save the image to the OutputStream
        image.Save(context.Response.OutputStream, ImageFormat.Png);
    }
    else
    {
        context.Response.ContentType = "text/html";
        context.Response.Write("<p>Error: Image ID is not valid - image may have been deleted from the database.</p>");
    }
}
The error occurs on the line:
image.Save(context.Response.OutputStream, ImageFormat.Png);
UPDATE
I've changed my code to this, but the issue still happens:
var db = new MyWebEntities();

var screenshotData = (from screenshots in db.screenshots
                      where screenshots.id == imageID
                      select new ImageModel
                      {
                          ID = screenshots.id,
                          Language = screenshots.language,
                          ScreenshotByte = screenshots.screen_shot,
                          ProjectID = screenshots.projects_ID
                      });

foreach (ImageModel info in screenshotData)
{
    using (MemoryStream ms = new MemoryStream(info.ScreenshotByte))
    {
        Image image = Image.FromStream(ms);

        // Make it a thumbnail if requested
        if (isThumbnail == 1)
        {
            Image.GetThumbnailImageAbort myCallback = new Image.GetThumbnailImageAbort(ThumbnailCallback);
            image = image.GetThumbnailImage(200, 200, myCallback, IntPtr.Zero);
        }

        context.Response.ContentType = "image/png";
        // Save the image to the OutputStream
        image.Save(context.Response.OutputStream, ImageFormat.Png);
    }
}
Thanks.
Probably for the same reason that this guy was having problems: for the lifetime of an Image constructed from a Stream, the stream must not be destroyed.
So if your GetImage function constructs the returned image from a stream (e.g. a MemoryStream) and then closes the stream before returning the image then the above will fail. My guess is that your GetImage looks a tad like this:
Image GetImage(int id)
{
    byte[] data = ...; // get data from the database
    using (MemoryStream stream = new MemoryStream(data))
    {
        return Image.FromStream(stream); // the stream is disposed before the Image is used!
    }
}
If this is the case, then try having GetImage return the MemoryStream (or possibly the byte array) directly, so that you can create the Image instance in your ProcessRequest method and dispose of the stream only when the processing of that image has completed.
This is mentioned in the documentation, but it's kind of in the small print.
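Concretely, a sketch of that suggestion, with GetImageBytes as an illustrative stand-in for the database call: return the raw bytes, and keep the stream alive until Save has completed:
byte[] GetImageBytes(int id)
{
    // ... fetch the blob column from the database and return it unchanged ...
}

public void ProcessRequest(HttpContext context)
{
    byte[] data = GetImageBytes(imageID);
    using (var stream = new MemoryStream(data))
    using (Image image = Image.FromStream(stream))
    {
        context.Response.ContentType = "image/png";
        image.Save(context.Response.OutputStream, ImageFormat.Png);
    } // the stream outlives every use of the image, which is what GDI+ requires
}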
