Phaser Loader gets stuck on 99% - phaser-framework

Is there a limit to the number or size of assets I can load in a preload function? I'm finding that the loading progress often gets stuck at 99% and either takes a few minutes to finish (after which the completed event fires) or never completes at all.
Is there any way I can debug this to find out where the process is getting stuck, or is it simply that loading 250MB of game assets will crash the loader from time to time?

I don't know of any hard cutoff or limit built into Phaser on the number or size of assets that can be loaded in the preload method. Since you sometimes see a delayed-but-successful completion and other times never reach 100%, it's more likely that you're hitting a timeout or some other load error.
You should be able to catch these errors with the FILE_LOAD_ERROR event:
preload() {
    // Preload setup
    this.load.on('loaderror', this.onLoadError);
}

onLoadError(file) {
    console.log(file);
}
Another option you might explore is modifying the LoaderConfig.
With that being said, 250MB before you do anything seems like a huge lift. You might want to consider breaking the load up across scenes, verifying that your assets are as compressed as possible, or lazy-loading assets when they're needed instead of in the preload. You can see an example of an on-demand asset load here.
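As a minimal sketch of "breaking up the load": split your asset manifest into chunks and load one chunk per scene or on demand. The `chunkManifest` helper and the manifest shape are hypothetical; `this.load.image` and `this.load.start` are standard Phaser 3 loader calls.

```javascript
// Split an asset manifest into fixed-size chunks so each scene
// (or each on-demand load) only pulls down part of the 250MB.
function chunkManifest(assets, chunkSize) {
  const chunks = [];
  for (let i = 0; i < assets.length; i += chunkSize) {
    chunks.push(assets.slice(i, i + chunkSize));
  }
  return chunks;
}

// Hypothetical usage inside a Phaser 3 scene: queue one chunk and
// kick the loader manually, since outside preload() it doesn't
// start on its own.
function loadChunk(scene, chunk) {
  chunk.forEach(({ key, url }) => scene.load.image(key, url));
  scene.load.once('complete', () => scene.events.emit('chunk-ready'));
  scene.load.start(); // required when loading outside preload()
}
```

Each chunk then gets its own progress bar and its own chance to fail fast, instead of one monolithic 99% that never resolves.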

Related

Invoke progressively getting slower

I've been investigating performance issues in my app, and it boils down to the time taken to call Invoke progressively getting longer. I am using System.Diagnostics.Stopwatch to time the Invoke call itself, and while it starts off at 20ms, after a few hundred calls it is around 4000ms. Logging shows the time steadily increasing (at first by ~2ms per call, then by ~100ms and more). I have three Invokes, all are exhibiting the same behaviour.
I am loading medical images, and I need to keep my UI responsive while doing so, hence the use of a background worker, where I can load and process images; once loaded, they need to be added to the main UI for the user to see.
The problem didn't present itself until I tried to load a study of over 800 images. Previously my test sets have been ~100 images, ranging in total size from 400MB to 16GB. The problem set is only 2GB in size and takes close to 10 minutes to approach 50%, and the 16GB set loads in ~30s total, thus ruling out total image size as the issue. For reference my development machine has 32GB RAM. I have ensured that it is not the contents of the invoked method by commenting the entire thing out.
What I want to understand is how the time taken to invoke can progressively increase. Is this actually a thing? My call stacks are not getting deeper, the number of threads is consistent, so what resource is being consumed to cause this? What am I missing!?
public void UpdateThumbnailInfo(Thumbnail thumb, ThumbnailInfo info)
{
    if (InvokeRequired)
    {
        var sw = new Stopwatch();
        sw.Start();
        Invoke((Action<Thumbnail, ThumbnailInfo>) UpdateThumbnailInfo, thumb, info);
        Log.Debug("Update Thumbnail Info Timer: {Time} ms - {File}", (int) sw.ElapsedMilliseconds, info.Filename);
    }
    else
    {
        // Do stuff here
    }
}
It looks like you are calling UpdateThumbnailInfo from a different thread. If so, then this is expected behavior: you are queuing hundreds of tasks on the UI thread. For every loaded image the UI has to do a lot of work, so as the number of pending images grows, everything slows down.
A few things that you can do:
* Use BeginInvoke in place of Invoke. Since your method returns void, you won't need EndInvoke.
* Use SuspendLayout and ResumeLayout to prevent the UI from updating incrementally, and instead update everything once after all images are loaded.

Profiling heapdumps in Chrome Developer Tools (memory leak)

I'm having a bit of trouble with a NodeJS/Express/React application that is in production as we speak.
The problem is that its memory usage keeps climbing and just doesn't stop. It is slow and steady, and eventually Node crashes. I have several heapdumps that I created with the help of node-heapdump; however, I don't know how to properly identify the leak.
I will share an image of my snapshot. Please note that I sorted by shallow size, so supposedly one of the objects/types that appear at the top must be the problem:
As can be seen below, there is this "Promis in #585" that appears in many places and could be the one, but I'm unable to identify that line, function or component.
Can anybody help? I can share more screenshots if you want.
Thanks.
I found the problem.
I'm using React Body Classname in my app so that when we load different routes we can change the body class from the client side. This npm module needs to be used with the Rewind() function when you do server-side rendering in order to avoid memory leaks:
This is the module I'm talking about:
https://github.com/iest/react-body-classname
And, in order to avoid the memory leak, we are calling:
BodyClassName.rewind()
In the render function of our main App.js container component. This way, no matter what URL a user lands on, Rewind() will always be called, and so the data that can be garbage collected will be properly freed.
Now our app stays at a nice and steady 120mb of memory usage.
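For anyone curious why rewind matters: side-effect components like this accumulate state in a module-level store on every server render, and rewind() both returns the latest value and clears the store. A minimal sketch of the pattern (the internals here are illustrative, not react-body-classname's actual source):

```javascript
// Minimal sketch of the "rewind" pattern used by side-effect
// components (react-body-classname, react-side-effect, etc.).
// On the server, the module accumulates state across renders;
// rewind() hands back the latest value and resets the store, so
// nothing is retained from one request to the next.
const mountedInstances = [];

function record(className) {
  // Called once per rendered component instance.
  mountedInstances.push(className);
}

function rewind() {
  const latest = mountedInstances[mountedInstances.length - 1];
  mountedInstances.length = 0; // reset, so renders don't leak across requests
  return latest;
}
```

Forget to call rewind() after renderToString and the array grows forever on a long-running server, which is exactly the slow, steady climb described above.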
Thanks anyway :D

Why does text render slower than images with server-side rendering?

I have many examples where text renders more slowly than an image, which feels almost instant. I am doing this via reactjs and server-side rendering with nodejs. For example, this gif: http://recordit.co/waMa5ocwdd
shows that the header image loads instantly and the CSS is already loaded, as the colors are there and present. But, for some reason, the text takes almost half a second to appear. How can I fix or optimize this?
Thanks!
Ok, so when debugging this kind of stuff it's useful to hit YSlow for the latest tips, etc.
In general, though, it's good to remember that browsers make separate requests for each item in your page (i.e. everything with a URL: images, CSS, etc.) and that most of them have some kind of cap on concurrent downloads (4 seems common, but it varies and changes a lot). So while 12 requests isn't a lot, it's still time, as is the time to parse and load your JS. And parsing and loading JS is more time still (and, in most browsers, will pause further downloads until it's done).
Without spending a ton of time, I'm guessing that your HTML loads, that calls in the header image, and then the browser starts hitting all the JS and react framework code and it takes a second or two to figure out what to render next.
Again, YSlow has a lot of advice on how to optimize those things, but that's my 2c.
EDIT: adding detail in response to your first question.
As I mentioned above, the JS is only part of the problem. The total render time includes the time it takes to download and parse everything (including CSS, etc). As an example, looking at it in the Chrome debug tools, it takes around 300ms for the HTML to download and be parsed enough for the next resources to get called in. In my browser the next two are your main CSS and logo.png. At around 800ms your logo finishes downloading, and it's rendered almost immediately. Around the time the HTML is done downloading, the first JS script starts downloading (I don't think turning JS off stops that from happening, though it probably stops the parsing; I've never tested it). Somewhere around 700ms you start pulling down the font sets you're using, and they finish downloading around 1 second. Your first text shows up about 200ms after that, so I'm guessing that pulling and parsing the font files is the holdup (compounded by them queuing behind other resources).
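If web fonts are indeed the bottleneck, two common mitigations are preloading the font file so it downloads in parallel with CSS/JS, and letting the browser paint fallback text while the font is in flight. A sketch, assuming self-hosted fonts (the file and family names here are hypothetical):

```html
<!-- Fetch the font early, instead of waiting for the CSS parser
     to discover the @font-face rule. -->
<link rel="preload" href="/fonts/my-font.woff2" as="font" type="font/woff2" crossorigin>

<style>
  @font-face {
    font-family: "MyFont";                          /* hypothetical family name */
    src: url("/fonts/my-font.woff2") format("woff2");
    font-display: swap;                             /* paint fallback text immediately */
  }
</style>
```

With font-display: swap, the half-second of invisible text becomes a brief flash of the fallback font instead.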

BreezeJS RequireJS optimized for production

Using BreezeJS, RequireJS, AngularJS with NodeJS and MongoDB as the backend, I'm building a fat-client application, with great success so far, as BreezeJS takes away the work of keeping my domain model persisted. But it's growing, and it now takes over five seconds to load all the files if they are not cached on localhost; catastrophic if you are trying to do a quick demo using a remote server.
R optimizer Warning:
bower_components/breezejs/breeze.debug.js has more than one anonymous define.
May be a built file from another build system like, Ender. Skipping
normalization.
Trying to run the compiled production file throws:
Uncaught Error: Mismatched anonymous define() module: function (){ return definition(global); }
(breeze.debug.js L10)
Has anyone gotten BreezeJS+RequireJS into production?
Take a look at the Todo-Require sample in the breeze.samples.js GitHub repo.
The Todo-KO-Require sample shows you how to code with require but it doesn't show you how to package things for production. You will suffer if you're asking require to download every individual file on demand.
You need to optimize with bundling and minification ... a topic outside of the breeze purview and not something we are in a hurry to produce. Perhaps you'd like to take that bull by the horns and share with the rest of us.
Why worry?
[update, 2 July 2014]
Let's take a step back and rediscover the point of all this. What is require doing for you?
I've used it with KO as a vehicle for dependency injection. That's its role in Durandal.
Angular comes with its own DI, which reduces the role of require in an Ng app to asynchronous file loader. That's usually "meh" for me, in part because one soon encounters the file-loading flurry that you describe. That leads to bundling, which is a headache and can just as easily be done with other tooling.
I see the value in large applications with dynamically loaded modules. But Ng is woeful in this regard, quite apart from the async file loading. That's something they'll address in v2.
I'm happy to leave you to a contrary opinion. So let's consider what would happen if we can't fix this problem. What if breeze cannot be optimized with r?
My instinct is that it isn't really optimal to bundle breeze with anything else anyway!
The minified breeze is rather large in itself. It is not evident to me that you would gain any performance advantage by bundling it with your application assets. Sure, you want to keep the number of server requests down. But are two requests with half the payload each slower than one big request? Do you know, for your target environment?
I'm not the kind of pedant who insists that every script be delivered by require. It's trivial to load BreezeJS separately with a script tag and then make it available to other require-aware modules (I shall assume you know how to do this). What would be horrible about that?
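For readers who don't know the trick being alluded to: once breeze is on the page via a script tag, a one-line named define makes it visible to require-aware modules without any extra HTTP request. A sketch, assuming breeze exposes a window.breeze global when loaded standalone:

```javascript
// breeze.min.js is already on the page via a plain <script> tag.
// Register the already-loaded global as a named module so that
// define(['breeze'], ...) resolves without triggering a download.
define('breeze', [], function () {
  return window.breeze;
});
```

The 'empty:' paths entries in the build config shown further down accomplish the same thing at optimizer time: they tell r.js the dependency will be provided at runtime, so it is excluded from the bundle.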
While we look forward to your repro sample (see my comment below), I may have difficulty justifying priority attention to this issue. Convince me otherwise.
I managed to compile my project leaving out breeze, with a small adjustment to the breeze mongo dataservice file header, using this r optimizer config:
paths: {
    'breeze': 'empty:',
    'breeze-dataservice-mongo': 'empty:'
}
The breeze mongo dataservice can be included as soon as it conforms, like lib/breeze-angular does.
(function () {
    "use strict";

    requirejs.config({
        paths: {
            'breeze': 'bower_components/breezejs/breeze.debug',
            'breeze-dataservice-mongo': 'lib/breeze.dataService.mongo'
        }
    });

    require(['angular', 'jquery', 'core/logger', 'fastclick', 'core/index', 'domready!'], function (angular, $, logger, fastClick) {
        logger.info('iaGastro client is booting');
        fastClick.attach(document.body);
        angular.bootstrap(document, ['iaApp']);
    });
})();
Leaving out SaveQueueing completely, I think I can find a different solution for my concurrent save error..
#Ward:
RequireJS does static file loading, like my domain classes, and also templates and JSON files. Now it also concatenates all my files and minifies them with one more parameter. It's probably the docs, which are not the best; I feel I'm not the only one who sometimes misunderstands RequireJS.
Its error messages can also be frustrating (circular dependencies..).
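That "one more parameter" is the r.js optimizer build config. A minimal sketch of such a build file, reusing the 'empty:' paths from the config shown earlier (the entry module and output names here are hypothetical):

```javascript
// build.js -- run with: node r.js -o build.js
({
    baseUrl: '.',
    name: 'main',                 // hypothetical entry module
    mainConfigFile: 'main.js',    // reuse the requirejs.config() paths from the app
    out: 'main-built.min.js',     // single concatenated, minified output file
    paths: {
        // delivered separately via script tags, so excluded from the bundle
        'breeze': 'empty:',
        'breeze-dataservice-mongo': 'empty:'
    }
})
```

This turns the flurry of individual module requests into one bundled download, while breeze stays a separate, cacheable script.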

Memory leak in a node.js crawler application

For over a month I've been struggling with a very annoying memory leak issue, and I have no clue how to solve it.
I'm writing a general-purpose web crawler based on: http, async, cheerio and nano. From the very beginning I've been struggling with a memory leak which was very difficult to isolate.
I know it's possible to do a heapdump and analyse it with Google Chrome, but I can't understand the output. It's usually a bunch of meaningless strings and objects leading to some anonymous functions, telling me exactly nothing (it might be a lack of experience on my side).
Eventually I came to the conclusion that the library I had been using at the time (jQuery) had issues, and I replaced it with Cheerio. I had the impression that Cheerio solved the problem, but now I'm sure it only made it less dramatic.
You can find my code at: https://github.com/lukaszkujawa/node-web-crawler. I understand it might be lots of code to analyse, but perhaps I'm doing something stupid which is obvious straight away. I'm suspecting the main agent class, which does HTTP requests from multiple "threads" (with async.queue): https://github.com/lukaszkujawa/node-web-crawler/blob/master/webcrawler/agent.js
If you would like to run the code it requires CouchDB and after npm install do:
$ node crawler.js -c conf.example.json
I know that Node doesn't go crazy with garbage collection, but after 10 minutes of heavy crawling the used memory can easily go over 1GB.
(tested with v0.10.21 and v0.10.22)
For what it's worth, Node's memory usage will grow and grow even if your actual used memory isn't very large. This is for optimization on behalf of the V8 engine. To see your real memory usage (to determine if there is actually a memory leak) consider dropping this code (or something like it) into your application:
setInterval(function () {
    if (typeof gc === 'function') {
        gc();
    }
    applog.debug('Memory Usage', process.memoryUsage());
}, 60000);
Run node --expose-gc yourApp.js. Every minute there will be a log line indicating real memory usage immediately after a forced garbage collection. I've found that watching the output of this over time is a good way to determine if there is a leak.
If you do find a leak, the best way I've found to debug it is to eliminate large sections of your code at a time. If the leak goes away, put it back and eliminate a smaller section of it. Use this method to narrow it down to where the problem is occurring. Closures are a common source, but also check for anywhere else references may not be cleaned up. Many network applications will attach handlers for sockets that aren't immediately destroyed.