I'm downloading some small images (about 40 KB each) in my MonoTouch app. The images are downloaded in parallel - the main thread creates the requests and executes them asynchronously.
I use WebRequest.Create to create the HTTP request, and in the completion handler, I retrieve the response stream using response.GetResponseStream(). Then, the following code reads the data from the response stream:
var ms = new MemoryStream();
byte[] buffer = new byte[4096];
int read;
while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
{
ms.Write(buffer, 0, read);
}
When downloading only one image, this runs very fast (50-100 milliseconds, including the wait for the web request). However, as soon as there are several images, say 5-10, these lines take more than 2 seconds to complete, and sometimes the thread spends more than 4 seconds there. Note that I'm not talking about the time needed for response.BeginGetResponse or the time waiting for the callback to run.
I then tested the following code, which needs less than 100 milliseconds in the same scenario:
var ms = new MemoryStream();
responseStream.CopyTo(ms);
What's the reason for this huge lag in the first version?
The reason I need the first version of the code is that I need the partially downloaded data of the image (especially when the image is bigger). To isolate the performance problem, I removed the code which deals with the partially downloaded image.
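(For context, the progressive variant looks roughly like this - OnPartialImage is a hypothetical callback, and copying the whole stream on every chunk is only to keep the sketch short:)

var ms = new MemoryStream();
var buffer = new byte[4096];
int read;
while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
{
    ms.Write(buffer, 0, read);
    OnPartialImage(ms.ToArray()); // hypothetical: update the UI with the bytes received so far
}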
I ran the code in Debug and Release mode in the simulator as well as on my iPad 3, and I tried both compiler modes, LLVM and non-LLVM. The lag was there in all configurations, Debug/Release, Device/Simulator, LLVM/non-LLVM.
Most likely it is a limitation on the number of concurrent HTTP network connections:
http://msdn.microsoft.com/en-us/library/system.net.servicepoint.connectionlimit.aspx
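For example, you could try raising the default limit before creating any requests (the value here is just an example, not a recommendation):

// The default is 2 connections per host; raise it before issuing the requests.
System.Net.ServicePointManager.DefaultConnectionLimit = 10;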
Hello, maybe not a direct answer to your question, but have you tried SDWebImage from the Xamarin Component Store? It's an excellent library developed by Olivier Poitrey that provides an asynchronous image downloader, memory and disk image caching, and some other benefits out of the box; it is really nice.
Hope this helps
Alex
I am using SkiaSharp to load an SVG drawing. It's a site plan, so is reasonably complex, and it takes a long time to load. On my Samsung Galaxy phone, it's about 3 seconds, and during that time the phone completely locks up, which is quite unacceptable.
I am using SkiaSharp.Extended.Svg.SKSvg, but cannot find an asynchronous version of the Load() method. Is there one? Or maybe a better way of doing this?
I am overlaying objects on top of the site plan, and it has taken me considerable time to get all the scaling and alignment sorted, so if at all possible I'd like to stick with SkiaSharp rather than start from scratch with something completely different.
Thanks for your help!
3 seconds does sound a bit long...
It may not be the SVG part, but rather the loading of the file off the file system.
Either way, you might be able to just load the whole thing in a background task (pseudocode):
var picture = Task.Run(() => {
    var stream = LoadStream();
    var svg = LoadSvg(stream);
    return svg.GetPicture();
});
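A more concrete sketch, assuming SkiaSharp.Extended.Svg's SKSvg.Load(Stream) overload and a hypothetical file path svgPath, awaited from an async handler so the UI thread stays free:

SKPicture picture = await Task.Run(() =>
{
    using (var stream = File.OpenRead(svgPath))
    {
        var svg = new SkiaSharp.Extended.Svg.SKSvg();
        return svg.Load(stream); // parsing happens off the UI thread
    }
});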
I have a very interesting problem.
I am running a custom movie player based on an NDK/C++/CMake toolchain that opens a streaming URL (MP4, H.264 and stereo audio). In order to restart from a given position, the player opens the stream, buffers frames to some length, then seeks to the new position and starts decoding and playing. This works fine every time, except if we power-cycle the device and follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22..26) and hardware (LG G6, G5 and LeEco). The issue does not happen if you keep the app open for 10 minutes.
I am looking for possible areas of concern. I have played with the decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and then played only video instead of video+audio. This resulted in constant starvation and, in turn, buffering. The behaviour appears to have changed across Android versions (I have no firm data here). I do believe that I am running into decoder starvation. Previously, I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer, which I changed on the input side to 1000 and 10000, but it does not make much difference.
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON" and -DANDROID_NATIVE_API_LEVEL="android-22", and links against c++_static.
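For reference, the dequeue calls in question take a timeout in microseconds; this is only a schematic sketch of the loop shape, not my actual player code, and the timeout values are illustrative:

#include <media/NdkMediaCodec.h>

// codec, sampleSize and presentationTimeUs come from the surrounding player code.
ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 1000 /* timeout in us */);
if (inIdx >= 0) {
    size_t capacity = 0;
    uint8_t* buf = AMediaCodec_getInputBuffer(codec, inIdx, &capacity);
    // ... copy the next encoded sample into buf ...
    AMediaCodec_queueInputBuffer(codec, inIdx, 0, sampleSize, presentationTimeUs, 0);
}

AMediaCodecBufferInfo info;
ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000 /* timeout in us */);
if (outIdx >= 0) {
    AMediaCodec_releaseOutputBuffer(codec, outIdx, true); // render to the surface
}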
Can anyone share what timeouts they have used successfully, or anything else that would help avoid starvation and the resulting buffering?
This is solved for now. The starvation was not caused by decoding; images were being consumed at a faster pace because the clock values returned were not in sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended way, but it always ran fast for the first 5-10 minutes after restarting the device. This device only had a Wi-Fi connection. Changing the clock id to CLOCK_REALTIME ensures correct presentation of images and no starvation.
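A minimal sketch of the change, assuming the player timestamps frames with a helper like this:

#include <ctime>
#include <cstdint>

// Returns the current time in microseconds for the given clock.
static int64_t NowUs(clockid_t clockId) {
    timespec ts{};
    clock_gettime(clockId, &ts);
    return static_cast<int64_t>(ts.tv_sec) * 1000000 + ts.tv_nsec / 1000;
}

// Before: int64_t now = NowUs(CLOCK_MONOTONIC);
// After:  int64_t now = NowUs(CLOCK_REALTIME);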
I've tried a couple of ImageMagick wrapper libraries and some S3 libraries. I'm having trouble choosing the best approach due to big performance differences.
I have settled with the node library "gm", which is a joy to work with and very well documented.
As for S3, I have tried both Amazon's own AWS library and "S3-Streams".
Edit: I just discovered that the AWS library can handle streams. I suppose s3.upload is a new function (or have I just missed it?). Anyway, I ditched s3-streams, which uses s3.uploadPart and is much more complicated. After switching libraries, streaming is equal to uploading buffers in my test case.
My test case is to split a 2 MB JPG file into approximately 30 tiles of 512 px and send each of the tiles to S3. ImageMagick has a really fast automatic way of generating tiles via the crop command. Unfortunately, I have not found any node library that can capture the multi-file output of the auto-generated tiles, so instead I have to generate the tiles in a loop by calling the crop command individually for each tile.
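(For reference, the one-shot ImageMagick tiling I mean is a single command along these lines; the filenames are illustrative:)

convert plan.jpg -crop 512x512 +repage tile_%d.jpg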
I'll present the total timings before the details:
A: 85 seconds (s3-streams)
A: 34 seconds (aws.s3.upload) (EDIT)
B: 35 seconds (buffers)
C: 25 seconds (buffers in parallel)
Clearly buffers are faster to work with than streams in this case. I don't know if gm or s3-streams has a bad implementation of streams or if I should have tweaked something. For now I'll go with solution B. C is even faster, but eats more memory.
I'm running this on a low end Digital Ocean Ubuntu machine. This is what I have tried:
A. Generate tiles and stream them one by one
I have an array prepared with crop information and s3Key for each tile to generate
The array is looped with "async.eachLimit(1)". I have not succeeded in generating more than one tile at once, hence limit(1).
As the tiles are generated, they are directly streamed to S3
Pseudo code:
async.eachLimit(tiles, 1, function(tile, callback) {
    gm(originalFileBuffer)
        .crop(tile.width, tile.height, tile.x, tile.y)
        .stream()
        .pipe(s3Stream({Key: tile.key, Bucket: tile.bucket})) // using the "s3-streams" package
        .on('finish', callback);
});
B. Generate tiles to buffers and upload each buffer directly with AWS-package
As the tiles are generated to buffers, they are directly uploaded to S3
Pseudo code:
async.eachLimit(tiles, 1, function(tile, callback) {
    gm(originalFileBuffer)
        .crop(tile.width, tile.height, tile.x, tile.y)
        .toBuffer(function(err, buffer) {
            s3.upload(..
                callback()
            )
        })
});
C. Same as B, but store all buffers in the tile array for later upload in parallel
Pseudo code:
async.eachLimit(tiles, 1, function(tile, callback) {
    gm(originalFileBuffer)
        .crop(tile.width, tile.height, tile.x, tile.y)
        .toBuffer(function(err, buffer) {
            tile.buffer = buffer;
            callback()
        })
});
..this next step is done after finalizing the first each-loop. I don't seem to gain speed by pushing the limit above 10.
async.eachLimit(tiles, 10, function(tile, callback) {
    s3.upload(tile.buffer..
        callback()
    )
});
Edit: Some more background as per Mark's request
I originally left out the details in the hope that I would get a clear answer about buffer vs stream.
The goal is to serve our app with images in a responsive way via a node/Express API. Backend db is Postgres. Bulk storage is S3.
Incoming files are mostly photos, floor plan drawings and PDF documents. The photos need to be stored in several sizes so I can serve them to the app in a responsive way: thumbnail, low-res, mid-res and original resolution.
Floor plans have to be tiled so I can load them incrementally (scrolling tiles) in the app. A full-resolution A1 drawing can be about 50 megapixels.
Files uploaded to S3 span from 50 kB (tiles) to 10 MB (floor plans).
The files come from various directions, but always as streams:
Form posts via web or some other API (SendGrid)
Uploads from the app
Downloaded stream from S3 when uploaded files needs more processing
I'm not keen on having the files temporarily on local disk, hence only buffer vs stream. If I could use the disk I'd use IM's own tile function for really speedy tiling.
Why not local disk?
Images are encrypted before uploading to S3. I don't want unencrypted files to linger in a temp directory.
There is always the issue of cleaning up temp files, with possible orphan files after unintended crashes etc.
After some more tinkering I feel obliged to answer my own question.
Originally I used the npm package s3-streams for streaming to S3. This package uses aws.s3.uploadPart.
Now I found out that the aws package has a neat function aws.s3.upload which takes a buffer or a stream.
After switching to AWS's own streaming function, there is no time difference between buffer and stream uploads.
I might have used s3-streams in the wrong way, but I also discovered a possible bug in that library (regarding files > 10 MB). I posted an issue but haven't got any answer. My guess is that the library has been abandoned since the s3.upload function appeared.
So, the answer to my own question:
There might be differences between buffers and streams, but in my test case they are equal, which makes this a non-issue for now.
Here is the new "save"-part in the each loop:
let fileStream = gm(originalFileBuffer)
    .crop(tile.width, tile.height, tile.x, tile.y)
    .stream();
let params = {Bucket: 'myBucket', Key: tile.s3Key, Body: fileStream};
let s3options = {partSize: 10 * 1024 * 1024, queueSize: 1};
s3.upload(params, s3options, function(err, data) {
    console.log(err, data);
    callback();
});
Thank you for reading.
I'm trying to capture user's audio input from the browser. I have done it with WAV but the files are really big. A friend of mine told me that OGG files are much smaller.
Does anyone know how to convert WAV to OGG?
I also have the raw data buffer, so I don't really need to convert from WAV - I just need an OGG encoder.
Here's the WAV encoder from Matt Diamond's RecorderJS:
function encodeWAV(samples){
    var buffer = new ArrayBuffer(44 + samples.length * 2);
    var view = new DataView(buffer);
    /* RIFF identifier */
    writeString(view, 0, 'RIFF');
    /* file length */
    view.setUint32(4, 32 + samples.length * 2, true);
    /* RIFF type */
    writeString(view, 8, 'WAVE');
    /* format chunk identifier */
    writeString(view, 12, 'fmt ');
    /* format chunk length */
    view.setUint32(16, 16, true);
    /* sample format (raw) */
    view.setUint16(20, 1, true);
    /* channel count */
    view.setUint16(22, 2, true);
    /* sample rate */
    view.setUint32(24, sampleRate, true);
    /* byte rate (sample rate * block align) */
    view.setUint32(28, sampleRate * 4, true);
    /* block align (channel count * bytes per sample) */
    view.setUint16(32, 4, true);
    /* bits per sample */
    view.setUint16(34, 16, true);
    /* data chunk identifier */
    writeString(view, 36, 'data');
    /* data chunk length */
    view.setUint32(40, samples.length * 2, true);
    floatTo16BitPCM(view, 44, samples);
    return view;
}
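(The snippet relies on two helpers from the same library, which look roughly like this:)

function floatTo16BitPCM(output, offset, input){
    for (var i = 0; i < input.length; i++, offset += 2){
        var s = Math.max(-1, Math.min(1, input[i]));
        output.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
    }
}

function writeString(view, offset, string){
    for (var i = 0; i < string.length; i++){
        view.setUint8(offset + i, string.charCodeAt(i));
    }
}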
Is there one for OGG?
The Web Audio spec is actually intended to allow exactly this kind of functionality, but is just not close to fulfilling that purpose yet:
This specification describes a high-level JavaScript API for processing and synthesizing audio in web applications. The primary paradigm is of an audio routing graph, where a number of AudioNode objects are connected together to define the overall audio rendering. The actual processing will primarily take place in the underlying implementation (typically optimized Assembly / C / C++ code), but direct JavaScript processing and synthesis is also supported.
Here's a statement on the current w3c audio spec draft, which makes the following points:
While processing audio in JavaScript, it is extremely challenging to get reliable, glitch-free audio while achieving a reasonably low-latency, especially under heavy processor load.
JavaScript is very much slower than heavily optimized C++ code and is not able to take advantage of SSE optimizations and multi-threading which is critical for getting good performance on today's processors. Optimized native code can be on the order of twenty times faster for processing FFTs as compared with JavaScript. It is not efficient enough for heavy-duty processing of audio such as convolution and 3D spatialization of large numbers of audio sources.
setInterval() and XHR handling will steal time from the audio processing. In a reasonably complex game, some JavaScript resources will be needed for game physics and graphics. This creates challenges because audio rendering is deadline driven (to avoid glitches and get low enough latency).
JavaScript does not run in a real-time processing thread and thus can be pre-empted by many other threads running on the system.
Garbage Collection (and autorelease pools on Mac OS X) can cause unpredictable delay on a JavaScript thread.
Multiple JavaScript contexts can be running on the main thread, stealing time from the context doing the processing.
Other code (other than JavaScript) such as page rendering runs on the main thread.
Locks can be taken and memory is allocated on the JavaScript thread. This can cause additional thread preemption.
The problems are even more difficult with today's generation of mobile devices which have processors with relatively poor performance and power consumption / battery-life issues.
ECMAScript (js) is really fast for a lot of things, and is getting faster all the time depending on what engine is interpreting the code. For something as intensive as audio processing however, you would be much better off using a low-level tool that's compiled to optimize resources specific to the task. I'm currently using ffmpeg on the server side to accomplish something similar.
I know that it is really inefficient to have to send a wav file across an internet connection just to obtain a more compact .ogg file, but that's the current state of things with the web audio api. To do any client-side processing the user would have to explicitly give access to the local file system and execution privileges for the file to make the conversion.
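For example, the server-side conversion can be a single ffmpeg invocation (the quality setting is illustrative):

ffmpeg -i input.wav -c:a libvorbis -q:a 4 output.ogg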
Edit: You could also use Google's native-client if you don't mind limiting your users to Chrome. It seems like very promising technology that loads in a sandbox and achieves speeds nearly as good as natively executed code. I'm assuming that there will be similar implementations in other browsers at some point.
This question has been driving me crazy because I haven't seen anyone come up with a really clean solution, so I came up with my own library:
https://github.com/sb2702/audioRecord.js
Basic usage
// Request user permission for microphone access
audioRecord.requestDevice(function(recorder){

    recorder.record(); // Start recording
    recorder.stop();   // Stop recording

    recorder.exportOGG(function(oggBlob){
        // Here's your OGG file
    });

    recorder.exportMP3(function(mp3Blob){
        // Here's your MP3 file
    });

    recorder.exportWAV(function(wavBlob){
        // Here's your WAV file
    });
});
Using the continuous mp3 encoding option, it's entirely reasonable to capture and encode audio input entirely in the browser, cross-browser, without a server or native code.
DEMO: http://sb2702.github.io/audioRecord.js/
It's still rough around the edges, but I'll try to clean / fix it up.
NEW: Derivative work of Matt Diamond's recorderjs recording to Ogg-Opus
To encode a whole file to Ogg-Opus in the browser without special extensions, one may use an Emscripten port of opus-tools/opusenc (demo). It comes with decoding support for WAV, AIFF and a couple of other formats, and a built-in resampler.
An Ogg-Vorbis encoder is also available.
Since the questioner is primarily after audio compression, they might also be interested in mp3 encoding using lame.
Ok, this might not be a direct answer as it does not say how to convert .wav into .ogg. Then again, why bother with the conversion when you can get the .ogg file directly? This depends on the MediaRecorder API, but browsers which support Web Audio usually have it too (Firefox 25+ and Chrome 47+).
github.io Demo
Github Code Source
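A minimal sketch of the idea, assuming the browser accepts an Ogg/Opus mime type (Firefox does; Chrome records WebM/Opus instead):

navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
    var recorder = new MediaRecorder(stream, { mimeType: 'audio/ogg; codecs=opus' });
    var chunks = [];
    recorder.ondataavailable = function(e) { chunks.push(e.data); };
    recorder.onstop = function() {
        var oggBlob = new Blob(chunks, { type: 'audio/ogg' });
        // ...upload or play oggBlob...
    };
    recorder.start();
    setTimeout(function() { recorder.stop(); }, 5000); // record for 5 seconds
});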
I wish to record the microphone audio stream so I can do realtime DSP on it.
I want to do so without having to use threads and without having .read() block while it waits for new audio data.
UPDATE/ANSWER: It's a bug in Android. 4.2.2 still has the problem, but 5.01 IS FIXED! I'm not sure where the divide is but that's the story.
NOTE: Please don't say "Just use threads." Threads are fine but this isn't about them, and the android developers intended for AudioRecord to be fully usable without me having to specify threads and without me having to deal with blocking read(). Thank you!
Here is what I have found:
When the AudioRecord object is initialized, it creates its own internal ring type buffer.
When .start() is called, it begins recording to said ring buffer (or whatever kind it really is.)
When .read() is called, it reads either half of bufferSize or the specified number of bytes (whichever is less) and then returns.
If there are more than enough audio samples in the internal buffer, then read() returns instantly with the data. If there are not enough yet, then read() waits until there are, then returns with the data.
.setRecordPositionUpdateListener() can be used to set a Listener, and .setPositionNotificationPeriod() and .setNotificationMarkerPosition() can be used to set the notification Period and Position, respectively.
However, the Listener seems to be never called unless certain requirements are met:
1: The Period or Position must be equal to bufferSize/2 or (bufferSize/2)-1.
2: A .read() must be called before the Period or Position timer starts counting - in other words, after calling .start(), also call .read(), and each time the Listener is called, call .read() again.
3: .read() must read at least half of bufferSize each time.
So using these rules I am able to get the callback/Listener working, but for some reason the reads are still blocking, and I can't figure out how to get the Listener called only when there is a full read's worth of data.
If I rig up a button view to trigger a read, I can tap it, and if I tap rapidly, read() blocks. But if I wait for the audio buffer to fill, the first tap is instant (read returns right away), while subsequent rapid taps are blocked because read() has to wait, I guess.
Any insight on how I might make the Listener work as intended - so that it only gets called when there's enough data for read() to return instantly - would be greatly appreciated.
Below are the relevant parts of my code.
I have some log statements in my code which send strings to logcat which allows me to see how long each command is taking, and this is how I know that read() is blocking.
(And the buttons in my simple test app are also very slow to respond when it is reading repeatedly, but the CPU is not pegged.)
Thanks,
~Jesse
In my OnCreate():
bufferSize = AudioRecord.getMinBufferSize(samplerate, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT) * 4;
recorder = new AudioRecord(AudioSource.MIC, samplerate, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
recorder.setRecordPositionUpdateListener(mRecordListener);
recorder.setPositionNotificationPeriod(bufferSize / 2);
//recorder.setNotificationMarkerPosition(bufferSize / 2);
audioData = new short[bufferSize];
recorder.startRecording();
samplesread = recorder.read(audioData, 0, bufferSize); // This triggers it to start doing the callback.
Then here is my listener:
public OnRecordPositionUpdateListener mRecordListener = new OnRecordPositionUpdateListener()
{
    public void onPeriodicNotification(AudioRecord recorder) // This one gets called every period.
    {
        Log.d("TimeTrack", "AAA");
        samplesread = recorder.read(audioData, 0, bufferSize);
        Log.d("TimeTrack", "BBB");
        //player.write(audioData, 0, samplesread);
        //Log.d("TimeTrack", "CCC");
        reads++;
    }

    @Override
    public void onMarkerReached(AudioRecord recorder) // This one gets called only once -- when the marker is reached.
    {
        Log.d("TimeTrack", "AAA");
        samplesread = recorder.read(audioData, 0, bufferSize);
        Log.d("TimeTrack", "BBB");
        //player.write(audioData, 0, samplesread);
        //Log.d("TimeTrack", "CCC");
    }
};
UPDATE: I have tried this on Android 2.2.3, 2.3.4, and now 4.0.3, and all act the same.
Also: There is an open bug on code.google about it - one entry started in 2012 by someone else, and one from 2013 started by me (I didn't know about the first):
UPDATE 2016: Ahhhh, finally, after years of wondering if it was me or Android, I finally have the answer! I tried my above code on 4.2.2: same problem. I tried it on 5.01, AND IT WORKS!!! The initial .read() call is not needed anymore either. Now, once .setPositionNotificationPeriod() and .startRecording() are called, mRecordListener() just magically starts getting called every time there is data available, so it no longer blocks, because the callback isn't fired until enough data has been recorded. I haven't listened to the data to confirm it's recording correctly, but the callback happens like it should, and it is not blocking the activity like it used to!
http://code.google.com/p/android/issues/detail?id=53996
http://code.google.com/p/android/issues/detail?id=25138
If folks who care about this bug log in and vote for and/or comment on the bug maybe it'll get addressed sooner by Google.
It's a late answer, but I think I know where Jesse made a mistake. His read call is getting blocked because he is requesting a number of shorts equal to the buffer size, but the buffer size is in bytes and a short contains 2 bytes. If we make the short array the same length as the buffer size, we request twice as much data.
The solution is to make audioData = new short[bufferSize / 2]. If the buffer size is 1000 bytes, this way we will request 500 shorts, which is 1000 bytes.
He should also change samplesread = recorder.read(audioData, 0, bufferSize) to samplesread = recorder.read(audioData, 0, audioData.length).
UPDATE
Ok, Jesse, I can see where another mistake can be - the position notification period. This value has to be large enough that the listener isn't called too often, and we need to make sure that when the listener is called, the bytes to read are ready to be collected. If the bytes aren't ready when the listener is called, the main thread gets blocked by the recorder.read(audioData, 0, audioData.length) call until the requested bytes have been collected by AudioRecord.
You should calculate the buffer size and the length of the shorts array based on the time interval you set - how often you want the listener to be called. The position notification period, buffer size and shorts array length all have to be adjusted together. Let me show you an example:
int periodInFrames = sampleRate / 10;
int bufferSize = periodInFrames * 1 * 16 / 8;
audioData = new short [bufferSize / 2];
int minBufferSize = AudioRecord.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
if (bufferSize < minBufferSize) bufferSize = minBufferSize;
recorder = new AudioRecord(AudioSource.MIC, sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
recorder.setRecordPositionUpdateListener(mRecordListener);
recorder.setPositionNotificationPeriod(periodInFrames);
recorder.startRecording();
public OnRecordPositionUpdateListener mRecordListener = new OnRecordPositionUpdateListener() {
    public void onPeriodicNotification(AudioRecord recorder) {
        samplesread = recorder.read(audioData, 0, audioData.length);
        player.write(short2byte(audioData));
    }
};
private byte[] short2byte(short[] data) {
    int dataSize = data.length;
    byte[] bytes = new byte[dataSize * 2];
    for (int i = 0; i < dataSize; i++) {
        bytes[i * 2] = (byte) (data[i] & 0x00FF);
        bytes[(i * 2) + 1] = (byte) (data[i] >> 8);
        data[i] = 0;
    }
    return bytes;
}
So now a bit of explanation.
First we set how often the listener has to be called to collect audio data (periodInFrames). The position notification period is expressed in frames. The sampling rate is expressed in frames per second, so for a 44100 Hz sampling rate we have 44100 frames per second. I divided it by 10, so the listener will be called every 4410 frames = 100 milliseconds - a reasonable time interval.
Now we calculate the buffer size based on periodInFrames, so no data gets overwritten before we collect it. The buffer size is expressed in bytes. Our time interval is 4410 frames; each frame holds one sample per channel, so we multiply by the number of channels (1 in your case). Each sample is 1 byte for ENCODING_PCM_8BIT or 2 bytes for ENCODING_PCM_16BIT, so we multiply by the bits per sample (16 for ENCODING_PCM_16BIT, 8 for ENCODING_PCM_8BIT) and divide by 8.
Then we set the audioData length to half of bufferSize, so we make sure that when the listener gets called, the bytes to read are already there waiting to be collected. That's because a short contains 2 bytes and bufferSize is expressed in bytes.
Then we check whether bufferSize is large enough to successfully initialize the AudioRecord object; if it's not, we bump bufferSize to the minimum size - we don't need to change our time interval or the audioData length.
In our listener we read and store data in the short array. That's why we use audioData.length instead of the buffer size - only audioData.length tells us the number of shorts the buffer contains.
I had this working some time ago, so please let me know if it works for you.
I'm not sure why you're avoiding spawning separate threads, but if it's because you don't want to have to deal with coding them properly, you can call .schedule on a Timer object after each .read, with the time interval set to the time it takes to fill your buffer (number of samples in the buffer / sampleRate). Yes, I know this uses a separate thread, but this advice was given assuming that the reason you were avoiding threads was to avoid having to code them properly.
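A rough sketch of that idea, as a fixed-period variant with illustrative names (it still runs the read on the Timer's thread):

// Time in ms to record enough samples for one full read of audioData.
final long intervalMs = 1000L * audioData.length / sampleRate;
final Timer timer = new Timer(); // java.util.Timer
timer.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        int samplesRead = recorder.read(audioData, 0, audioData.length);
        // ...process samplesRead samples...
    }
}, intervalMs, intervalMs);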
This way, the longest time it could possibly block the thread for should be negligible. But I don't know why you'd want to do that.
If the above reason is not why you're avoiding using separate threads, may I ask why?
Also, what exactly do you mean by realtime? Do you intend to play back the affected audio using, say, an AudioTrack? Because the latency on most Android devices is pretty bad.