ScriptProcessorNode Memory leak

I'm working on a large project that relies heavily on web audio and ScriptProcessorNodes. After some recent intermittent crashing I've tracked the problem down to memory leaking from the ScriptProcessorNodes. I've read many tutorials, guides, bug reports, etc., and none of it seems to be helping. Here's a small toy example:
http://jsfiddle.net/6YBWf/
var context = new webkitAudioContext();
function killNode(node)
{
    return function()
    {
        node.disconnect();
        node.onaudioprocess = null;
        node = null;
    }
}
function noise()
{
    var node = context.createScriptProcessor(1024, 0, 1);
    node.onaudioprocess = function(e)
    {
        var output = e.outputBuffer.getChannelData(0);
        for(var i = 0; i < 1024; ++i)
        {
            output[i] = (Math.random() * 2 - 1) * 0.001;
        }
    }
    node.connect(context.destination);
    setTimeout(killNode(node), 100);
}
function generateNoise()
{
    for(var i = 0; i < 99999; ++i)
    {
        noise();
    }
}
generateNoise();
This will spin up many nodes and then disconnect them and set their onaudioprocess to null. From what I've read, given that I'm not retaining any references to them, shouldn't they get garbage collected?
My computer memory jumps up to about 16% and settles down to 14% a bit later but never goes below that. Can anyone show me an example similar to this where the nodes get properly collected? Is there something obvious I'm missing?

This has been confirmed as a regression in Chrome:
https://code.google.com/p/chromium/issues/detail?id=379753
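Until that regression is fixed, one workaround (not from the bug thread, just a sketch) is to keep a single long-lived ScriptProcessorNode and swap its onaudioprocess handler instead of creating and discarding thousands of nodes; makeNoiseHandler and silence below are my own illustrative names:
var context = new webkitAudioContext();

function makeNoiseHandler(gain) {
    return function(e) {
        var output = e.outputBuffer.getChannelData(0);
        for (var i = 0; i < output.length; ++i) {
            output[i] = (Math.random() * 2 - 1) * gain;
        }
    };
}

function silence(e) {
    var output = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < output.length; ++i) {
        output[i] = 0;
    }
}

// One node that lives for the whole session; only the handler changes.
var processor = context.createScriptProcessor(1024, 0, 1);
processor.onaudioprocess = silence;
processor.connect(context.destination);

// Make noise for 100 ms, then go quiet, without allocating a new node.
function noise() {
    processor.onaudioprocess = makeNoiseHandler(0.001);
    setTimeout(function() { processor.onaudioprocess = silence; }, 100);
}
Since only one node is ever created and connected, there is nothing per-call for the leaky code path to hold on to.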

Related

Provide progress on data processing

I've written a Node package that performs some intensive data processing. I'm able to watch progress via console.log, but I'd like to provide the end user with some way of monitoring it in an event-driven way. Basically, it would be like returning a promise, except instead of a single then it would fire an event for each "step" and then finally resolve with the data.
Unfortunately, I don't know enough about Node streams (which I'm guessing is the thing I need) to do this. Can I get a few pointers? How can I create a stream which is updated every time there is, say, 1% more progress, and then finally gives the computed data?
EDIT: As an example, consider this loop:
for(let i = 0; i < N; i++) {
    intensiveFunction();
    console.log(`${i} of ${N} completed`);
}
What I want to do instead is
for(let i = 0; i < N; i++) {
    intensiveFunction();
    // send user a signal that i/N of the task has been completed
}
You don't need to use streams. You can use an EventEmitter and emit the current progress, or any other event you want.
my-package.js
const EventEmitter = require('events');

// Create a class that extends from EventEmitter
// And now you can emit events when something happens, e.g., a progress update
class MyPackage extends EventEmitter {
    async intensiveFunction() {
        // Something
    }

    async process() {
        for(let i = 0; i < N; i++) {
            await this.intensiveFunction();
            this.emit('step', i, N); // Or send progress in % or whatever you want
        }
    }
}

module.exports = MyPackage;
index.js
const MyPackage = require('my-package');
const package = new MyPackage();
package.on('step', (step, total) => console.log(`${step}/${total}`));
package.process();
You can either provide a full events API, or mix it with promises. Meaning process can resolve once it's done, or you can emit an end event, or do both.
async process() {
    for(let i = 0; i < N; i++) {
        await this.intensiveFunction();
        this.emit('step', i, N); // Or send progress in % or whatever you want
    }
    // Emit the end event
    this.emit('end', result);
    // And resolve the promise
    return result; // Whatever result is
}

Is it normal that solving a TSP with a GA (Genetic Algorithm) implementation takes this much time?

I am working on a GA for a project. I am trying to solve the Travelling Salesman Problem using a GA. I used arrays to store data, since I think arrays are much faster than List. But for some reason it takes too much time, e.g. with MaxPopulation = 100000 and StartPopulation = 1000 the program takes about 1 min to complete. I want to know if this is a problem. If it is, how can I fix it?
A code part from my implementation:
public void StartAsync()
{
    Task.Run(() =>
    {
        CreatePopulation();
        currentPopSize = startPopNumber;
        while (currentPopSize < maxPopNumber)
        {
            Tour[] elits = ElitChromosoms();
            for (int i = 0; i < maxCrossingOver; i++)
            {
                if (currentPopSize >= maxPopNumber)
                    break;
                int x = rnd.Next(elits.Length - 1);
                int y = rnd.Next(elits.Length - 1);
                Tour parent1 = elits[x];
                Tour parent2 = elits[y];
                Tour child = CrossingOver(parent1, parent2);
                int mut = rnd.Next(100);
                if (mutPosibility >= mut)
                {
                    child = Mutation(child);
                }
                population[currentPopSize] = child;
                currentPopSize++;
            }
            progress = currentPopSize * 100 / population.Length;
            this.Progress = progress;
            GC.Collect();
        }
        if (GACompleted != null)
            GACompleted(this, EventArgs.Empty);
    });
}
In here "elits" are the chromosoms that have greater fit value than the average fit value of the population.
Scientific papers suggest a smaller population. Maybe you should follow what the other authors have written. Having a big population does not give you any advantage.
TSP can be solved by GA, but maybe it is not the most efficient approach to attack this problem. Look at this visual representation of TSP-GA: http://www.obitko.com/tutorials/genetic-algorithms/tsp-example.php
OK, I have just found a solution. Instead of using an array with a size of maxPopulation, replace the old individuals with bad fitness with the new generations. Now I am working with a smaller array, which has a length of 10,000; the length was 1,000,000 before and it was taking too much time. Now, in every iteration, select the best 1000 chromosomes, create new chromosomes using these as parents, and replace the old, bad ones. This works perfectly.
Code sample:
public void StartAsync()
{
    CreatePopulation(); // Creates chromosoms for starting
    currentProducedPopSize = popNumber; // produced chromosom number, starts with the length of the starting population
    while (currentProducedPopSize < maxPopNumber && !stopped)
    {
        Tour[] elits = ElitChromosoms(); // Gets best 1000 chromosoms
        Array.Reverse(population); // Orders by descending
        this.Best = elits[0];
        // Create new chromosoms as many as the number of bad chromosoms
        for (int i = 0; i < population.Length - elits.Length; i++)
        {
            if (currentProducedPopSize >= maxPopNumber || stopped)
                break;
            int x = rnd.Next(elits.Length - 1);
            int y = rnd.Next(elits.Length - 1);
            Tour parent1 = elits[x];
            Tour parent2 = elits[y];
            Tour child = CrossingOver(parent1, parent2);
            int mut = rnd.Next(100);
            if (mutPosibility <= mut)
            {
                child = Mutation(child);
            }
            population[i] = child; // Replace new chromosoms
            currentProducedPopSize++; // Increase produced chromosom number
        }
        progress = currentProducedPopSize * 100 / maxPopNumber;
        this.Progress = progress;
        GC.Collect();
    }
    stopped = false;
    this.Best = population[population.Length - 1];
    if (GACompleted != null)
        GACompleted(this, EventArgs.Empty);
}

Tour[] ElitChromosoms()
{
    Array.Sort(population);
    Tour[] elits = new Tour[popNumber / 10];
    Array.Copy(population, elits, elits.Length);
    return elits;
}

Browserify and extending Buffer.prototype

I'm trying to use some node.js modules in a chrome packaged app. (I'm talking to the serial port)
I've extended the Buffer prototype to add the 'indexOf' method.
I'm using Browserify, and what seems to be happening is it doesn't pick up my prototype extension. My Buffers end up being Uint8Arrays without indexOf available.
Is there a trick to extending Buffer in a way that Browserify will pick up?
My extension looks like this, but I've also tried npm packages that do the same thing (the below code was lifted from one), so I think the problem isn't necessarily in my code:
Buffer.indexOf = function(haystack, needle, i) {
    if (!Buffer.isBuffer(needle)) {
        needle = new Buffer(needle);
    }
    if (typeof i === 'undefined') {
        i = 0;
    }
    var l = haystack.length - needle.length + 1;
    while (i < l) {
        var good = true;
        for (var j = 0, n = needle.length; j < n; j++) {
            if (haystack.get(i + j) !== needle.get(j)) {
                good = false;
                break;
            }
        }
        if (good) {
            return i;
        }
        i++;
    }
    return -1;
};

Buffer.prototype.indexOf = function(needle, i) {
    return Buffer.indexOf(this, needle, i);
}
Is there a trick to extending Buffer in a way that Browserify will pick up?
I doubt it, since browserify has its own implementation for Buffer.
But what's worse is that any other module that uses your module is affected by your change to Buffer.prototype. Instead of extending the prototype of a core component, create a helper function.
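For example, a standalone helper module (a rough sketch reusing the search logic from the question, but with subscript access so it works on both Node's Buffer and the Uint8Array-backed Buffer that browserify provides, without touching any prototype):
// buffer-index-of.js - a hypothetical helper module, not part of browserify itself
function bufferIndexOf(haystack, needle, start) {
    if (!Buffer.isBuffer(needle)) {
        needle = new Buffer(needle);
    }
    var i = start || 0;
    var l = haystack.length - needle.length + 1;
    while (i < l) {
        var good = true;
        for (var j = 0, n = needle.length; j < n; j++) {
            // subscript access works for Buffer and Uint8Array alike
            if (haystack[i + j] !== needle[j]) {
                good = false;
                break;
            }
        }
        if (good) {
            return i;
        }
        i++;
    }
    return -1;
}

module.exports = bufferIndexOf;
Callers then use bufferIndexOf(buf, needle) instead of buf.indexOf(needle), so nothing depends on which Buffer implementation is in play.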

Solrnet/Tomcat 7 - memory consumption growing alarmingly when writing several large documents

I am writing very large (in both size and count) documents to a Solr index (100s of fields, many numeric and some text). I am using Tomcat 7 on W7 x64.
Based on #Maurico's suggestion for indexing millions of documents, I parallelize the write operation (see the code sample below).
The write to Solr method is being "Task"ed out from a main loop (Note: I task it out since the write op takes too long and holds up the main app)
The problem is that memory consumption grows uncontrollably; the culprit is the Solr write operations (when I comment them out, the run works fine). How do I handle this issue? Via Tomcat? Or SolrNet?
Thanks for your suggestions.
//main loop:
{
    :
    :
    :
    // indexDocsList is the list I create in the main loop and "chunk" out to send to the task.
    List<IndexDocument> indexDocsList = new List<IndexDocument>();
    for (int n = 0; n < N; n++)
    {
        indexDocsList.Add(new IndexDocument{X=1, Y=2.....});
        if (n % 5 == 0) // every 5th time we write to solr
        {
            var chunk = new List<IndexDocument>(indexDocsList);
            indexDocsList.Clear();
            Task.Factory.StartNew(() => WriteToSolr(chunk)).ContinueWith(task => chunk.Clear());
            GC.Collect();
        }
    }
}

private void WriteToSolr(List<IndexDocument> indexDocsList)
{
    try
    {
        if (indexDocsList == null) return;
        if (indexDocsList.Count <= 0) return;
        int fromInclusive = 0;
        int toExclusive = indexDocsList.Count;
        int subRangeSize = 25;
        //TO DO: This is still leaking some serious memory, need to fix this
        ParallelLoopResult results = Parallel.ForEach(Partitioner.Create(fromInclusive, toExclusive, subRangeSize), (range) =>
        {
            _solr.AddRange(indexDocsList.GetRange(range.Item1, range.Item2 - range.Item1));
            _solr.Commit();
        });
        indexDocsList.Clear();
        GC.Collect();
    }
    catch (Exception ex)
    {
        logger.ErrorException("WriteToSolr()", ex);
    }
    finally
    {
        GC.Collect();
    };
    return;
}
You are manually committing after each batch. This is the most expensive operation for Solr. In your case, I would recommend using autoCommit every x seconds together with the softAutoCommit feature (Solr 4.0). That should take care of Solr's side of things. You'll also have to tweak your JVM garbage collection options so that you don't get stop-the-world GC pauses.
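As a rough illustration only (the element names are standard Solr 4.x configuration, but the values are placeholders to tune, not recommendations), the commit policy would move out of the C# code and into solrconfig.xml along these lines, with the per-batch _solr.Commit() call removed:
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>          <!-- hard commit roughly every 60 s, for durability -->
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxTime>5000</maxTime>           <!-- soft commit every ~5 s, to make new docs searchable -->
  </autoSoftCommit>
</updateHandler>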

Help needed for making a calendar like MS Outlook

I am working on an app like the MS Outlook Calendar where the user can put events etc.
I am having a problem with the event object layout according to size: the user can drag and resize an event object in the MS Outlook calendar, and the size of the event objects is set automatically.
I need an algorithm for doing this; I have written my own, but there are several problems. Help needed.
This screenshot shows the event object arrangement, which is dynamic.
Here is the answer:
You can go for a rectangle packing algorithm, but keep in mind that the events should be sorted w.r.t. time and date, and only horizontal packing will work for you.
Here is the rectangle packing algorithm.
Since you're using Flex, this isn't a direct answer to your question, but it will hopefully set you down the right path.
Try taking a look at how FullCalendar's week and day views implement this. FullCalendar is a jQuery plugin that renders a calendar which does exactly what you're looking for.
You'll have to extract the rendering logic from FullCalendar and translate it to your project in Flex. I know JavaScript and ActionScript are very similar, but I've never used Flex — sorry I can't be more help in that area.
FullCalendar's repo is here. Specifically, it looks like AgendaView.js is the most interesting file for you to look at.
I think you are asking about a general object layout algorithm, right?
I am quite sure that this is an NP-complete problem: arrange a set of intervals, each defined by a start and an end, in as few columns as possible.
Being NP-complete means that your best shot is probably trying out all possible arrangements (sketched in code below):
find clusters in your objects -- the groups where you have something to do, where intervals do overlap.
for each cluster do
    let n be the number of objects in the cluster
    if n is too high (like 10 or 15), stop and just draw overlapping objects
    generate all possible orderings of the objects in the cluster (for n objects there are n! possible orderings, e.g. 6 objects give 720 orderings)
    for each ordering, lay out the objects in a trivial manner: loop through the elements and place them in an existing column if it fits there, and start a new column if you need one
    keep the layout with the least columns
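A minimal sketch of that brute-force idea in plain JavaScript (the event shape with numeric start/end fields, and the helper names permutations, layoutOrdering and bestLayout, are my own assumptions, not code from the question):
// Brute-force layout for one cluster of overlapping events.
// Each event is { start: number, end: number }.
function overlaps(a, b) {
    return a.start < b.end && b.start < a.end;
}

function permutations(items) {
    if (items.length <= 1) return [items];
    var result = [];
    items.forEach(function(item, idx) {
        var rest = items.slice(0, idx).concat(items.slice(idx + 1));
        permutations(rest).forEach(function(perm) {
            result.push([item].concat(perm));
        });
    });
    return result;
}

// Greedy pass for one ordering: put each event into the first column it fits in.
function layoutOrdering(ordering) {
    var columns = [];
    ordering.forEach(function(ev) {
        var target = null;
        for (var i = 0; i < columns.length && target === null; i++) {
            var clashes = columns[i].some(function(existing) { return overlaps(existing, ev); });
            if (!clashes) target = columns[i];
        }
        if (target === null) {
            target = [];
            columns.push(target);
        }
        target.push(ev);
    });
    return columns;
}

// Try every ordering of a cluster and keep the layout with the fewest columns.
function bestLayout(cluster) {
    if (cluster.length > 10) return layoutOrdering(cluster); // too many: settle for one greedy pass
    var best = null;
    permutations(cluster).forEach(function(ordering) {
        var candidate = layoutOrdering(ordering);
        if (best === null || candidate.length < best.length) best = candidate;
    });
    return best;
}
bestLayout would be called once per cluster found in the first step; each event's x position then comes from its column index, with a width of 1 divided by the number of columns in the winning layout.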
Here is how I did it:
1. Events are packed into the columns variable by day (or some other rule).
2. Events in one column are further separated into columns, as long as there is a continuous intersection on the Y-axis.
3. Events are assigned their X-axis value (0 to 1) and their X-size (0 to 1).
4. Events are recursively expanded until the last of each intersecting group (by Y and X axis) hits the column boundary or another event that has finished expanding.
Essentially it is brute force, but it works fairly quickly, since there are not many events that need further expanding beyond step 3.
var physics = [];
var step = 0.01;

var PackEvents = function(columns){
    var n = columns.length;
    for (var i = 0; i < n; i++) {
        var col = columns[ i ];
        for (var j = 0; j < col.length; j++)
        {
            var bubble = col[j];
            bubble.w = 1/n;
            bubble.x = i*bubble.w;
        }
    }
};

var collidesWith = function(a,b){
    return b.y < a.y+a.h && b.y+b.h > a.y;
};

var intersects = function(a,b){
    return b.x < a.x+a.w && b.x+b.w > a.x &&
           b.y < a.y+a.h && b.y+b.h > a.y;
};

var getIntersections = function(box){
    var i = [];
    Ext.each(physics,function(b){
        if(intersects(box,b) && b.x > box.x)
            i.push(b);
    });
    return i;
};

var expand = function(box,off,exp){
    var newBox = {
        x:box.x,
        y:box.y,
        w:box.w,
        h:box.h,
        collision:box.collision,
        rec:box.rec
    };
    newBox.x += off;
    newBox.w += exp;
    var i = getIntersections(newBox);
    var collision = newBox.x + newBox.w > 1;
    Ext.each(i,function(n){
        collision = collision || expand(n,off+step,step) || n.collision;
    });
    if(!collision){
        box.x = newBox.x;
        box.w = newBox.w;
        box.rec.x = box.x;
        box.rec.w = box.w;
    }else{
        box.collision = true;
    }
    return collision;
};

Ext.each(columns,function(column){
    var lastEventEnding = null;
    var columns = [];
    physics = [];
    Ext.each(column,function(a){
        if (lastEventEnding !== null && a.y >= lastEventEnding) {
            PackEvents(columns);
            columns = [];
            lastEventEnding = null;
        }
        var placed = false;
        for (var i = 0; i < columns.length; i++) {
            var col = columns[ i ];
            if (!collidesWith( col[col.length-1], a ) ) {
                col.push(a);
                placed = true;
                break;
            }
        }
        if (!placed) {
            columns.push([a]);
        }
        if (lastEventEnding === null || a.y+a.h > lastEventEnding) {
            lastEventEnding = a.y+a.h;
        }
    });
    if (columns.length > 0) {
        PackEvents(columns);
    }
    Ext.each(column,function(a){
        a.box = {
            x:a.x,
            y:a.y,
            w:a.w,
            h:a.h,
            collision:false,
            rec:a
        };
        physics.push(a.box);
    });
    while(true){
        var box = null;
        for(i = 0; i < physics.length; i++){
            if(!physics[i].collision){
                box = physics[i];
                break;
            }
        }
        if(box === null)
            break;
        expand(box,0,step);
    }
});
Result: http://imageshack.com/a/img913/9525/NbIqWK.jpg

Resources