What happens when we move from one state to another in Phaser? Are the arrays, objects, and images loaded in the previous state flushed when we move to the next state?
Thanks in advance!
By default, all display objects that you added to the game world (sprites, texts, etc.) are removed when switching to a different state. All of your loaded assets remain in the cache and you can use them in your new state.
You can change this behavior when you call the start method on the StateManager.
game.state.start("nextState", true, true);
The second parameter (clearWorld) specifies whether game objects should be cleared.
The third parameter (clearCache) specifies whether the game cache should be cleared.
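For example (a minimal sketch; the state key is a placeholder):
// Clear all game objects but keep loaded assets in the cache (the defaults):
game.state.start("nextState", true, false);
// Clear the game objects and flush the asset cache as well:
game.state.start("nextState", true, true);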
I am developing a simple DDD + Event sourcing based app for educational purposes.
In order to set the event version before storing to the event store, I would have to query the event store, but my gut tells me this is wrong because it causes concurrency issues.
Am I missing something?
There are different answers to that, depending on what use case you are considering.
Generally, the event store is a dumb, domain agnostic appliance. It's superficially similar to a List abstraction -- it stores what you put in it, but it doesn't actually do any work to satisfy your domain constraints.
In use cases where your event stream is just a durable record of things that have happened (meaning: your domain model does not get a veto; recording the event doesn't depend on previously recorded events), then append semantics are fine, and depending on the kind of appliance you are using, you may not need to know what position in the stream you are writing to.
For instance: the API for GetEventStore understands ExpectedVersion.ANY to mean append these events to the end of the stream wherever it happens to be.
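As a one-line sketch (the store object, stream name, and event are placeholders, not the actual GetEventStore client API):
// Append at whatever the current end of the stream happens to be; no concurrency check.
store.append("shipping-notifications", ExpectedVersion.ANY, [packageScanned]);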
In cases where you do care about previous events (the domain model is expected to ensure an invariant based on its previous state), then you need to do something to ensure that you are appending the event to the same history that you have checked. The most common implementations of this communicate the expected position of the write cursor in the stream, so that the appliance can reject attempts to write to the wrong place (which protects you from concurrent modification).
This doesn't necessarily mean that you need to query the event store to get the position. You are allowed to count the number of events in the stream when you load it, and to remember how many more events you've added, and therefore where the stream "should" be if you are still synchronized with it.
What we're doing here is analogous to a compare-and-swap operation: we get a representation of the original state of the stream, create a new representation, and then compare and swap the reference to the original to point instead to our changes.
oldState = stream.get()
newState = domainLogic(oldState)
stream.compareAndSwap(oldState, newState)
But because a stream is a persistent data structure with append-only semantics, we can use a simplified API that doesn't require duplicating the existing state.
events = stream.get()
changes = domainLogic(events)
stream.appendAt(count(events), changes)
If the API of your appliance doesn't allow you to specify a write position, then yes - there's the danger of a data race when some other writer changes the position of the stream between your query for the position and your attempt to write. Data obtained in a query is always stale; unless you hold a lock you can't be sure that the data hasn't changed at the source while you are reading your local copy.
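Putting that together, a conditional append is usually wrapped in a retry loop. Here is a minimal JavaScript sketch following the pseudocode above; the stream API and the conflict error are hypothetical stand-ins for whatever your appliance provides:

function saveWithRetry(stream, domainLogic, maxAttempts) {
    for (var attempt = 0; attempt < maxAttempts; attempt++) {
        var events = stream.get();            // load the current history
        var changes = domainLogic(events);    // decide based on that history
        try {
            // rejected by the appliance if another writer appended first
            stream.appendAt(events.length, changes);
            return;
        } catch (e) {
            // assume conflicts are signalled with a distinguishable error
            if (e.name !== "ConcurrencyError") throw e;
        }
    }
    throw new Error("gave up after " + maxAttempts + " concurrent conflicts");
}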
I guess you shouldn't think about the event version.
If you mean the event's place in the event stream then, in general, there's no guaranteed way to determine it at creation time, only at processing time or in the event store.
If it is exactly about the event version (see http://cqrs.nu/Faq, "How do I version/upgrade my events?"), you have it hardcoded in your application. So, consider the following use case:
First, you have an app generating some events. Next, you update the app and the events change (you add some fields or change the payload structure) but keep their logical meaning. So now you have old events in your event store and new events that differ significantly from the old ones. To distinguish one from the other, you use an event version, e.g. 0 and 1.
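A common way to handle this is an upcaster that translates old events to the current shape as they are read back. A minimal sketch, where the field names and the version-1 change are made up:

// Hypothetical upcaster: bring a version-0 event up to version 1.
function upcast(event) {
    if (event.version === 0) {
        return {
            version: 1,
            type: event.type,
            // pretend version 1 split the old single "name" field in two
            payload: { firstName: event.payload.name, lastName: "" }
        };
    }
    return event; // already the current version
}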
I built a homebrew data entity repository with a factory that defines a retention policy by type (e.g. absolute or sliding expiration). The policy also specifies the cache type: HttpContext request, session, or application. A MemoryCache is maintained by a caching proxy in all three cache types. Anyhow, I have a data entity service tied to the repository which does the load and save for our primary data entity. The idea is that you use the entity repository and don't need to care whether the entity is cached or retrieved from its data source (a db in this case).
An obvious assumption is that you need to synchronise the load/save events, since you must save the cached entity before loading the entity from its data source.
So I was investigating a data integrity issue in production today... :)
Today I read that there can be a good long gap between the entity being removed from the MemoryCache and the CacheItemRemovedCallback event firing (20 seconds by default). The simple lock I had around the load and save data ops was insufficient. Furthermore, the CacheItemRemovedCallback runs in its own context outside of HttpContext, making things interesting. It meant I needed to make the callback function static, as I was potentially assigning a disposed instance to the event.
Once I realised there was a possibility of a gap during which my data entity no longer existed in the cache but might not yet have been saved to its data source, the 3 corrupt orders out of 5000 had a likely explanation. While filling out a long form it would be easy to keep working beyond the policy's 20-minute sliding expiration on the primary data entity. That means if the user happens to submit at the same moment as the expiration, an interesting race condition emerges between the load (via request context) and the save (via the cache-expired callback).
With a simple lock it was a roll of the dice: would the save or the load win? Clearly we need the save to complete before the next load from the data source (the db). Ideally, when an item expires from the cache it would be atomically written to its data source. With the entity gone from the cache but the expired callback not yet fired, a load operation can slip in. The entity will not be found in the cache, so the load defaults to the data source; but since the save operation may not have commenced, the load reads stale data, corrupting integrity, and the late save will likely clobber whatever has been written since.
To accomplish synchronisation I needed a named signalling lock, so I settled on EventWaitHandle. A named lock is created per user, of which there are fewer than 5000. This allows the Load to wait on a signal from the expired event, which Saves the entity (on a thread that lives in its own context outside HttpContext). In the save it is easy to grab the existing named handle and signal the Load to continue once the Save is complete.
I also added a safeguard: the wait times out, and each 10-second block caused by the save operation is logged. As I said, there can be up to 20 seconds by default between an entity being removed from the MemoryCache and the cache noticing it and firing the event, which in turn saves the entity.
Thank you to anyone who followed my ramblings through all that. Given the nature of the sync requirements, was the EventWaitHandle lock the best solution?
For completeness I wanted to post what I did to address the issue. I made multiple changes to the design to create a tidier solution which did not require a named sync object and allowed me to use a simple lock instead.
First, the data entity repository is a singleton which was stored in the request cache. This front end of the repository is detached from the caches themselves. I changed it to reside in the session cache instead, which becomes important below.
Second, I routed the expired-entity event through the data entity repository above.
Third, I changed the MemoryCache event from RemovedCallback to UpdateCallback.**
Last, we tie it all together with a regular lock in the data entity repository, which now lives in the user's session; with the gap-less expiry event routed through the same repository, the one lock covers both the load and the save (expire) operations.
** These events are funny in that (a) you can't subscribe to both and (b) UpdateCallback is called before the item is removed from the cache, but it is not called when you explicitly remove the item (i.e. myCache.Remove(entity) won't fire UpdateCallback, though it would fire RemovedCallback). We decided that if the item was being forcefully removed from the cache, we didn't care. This happens when the user changes company or clears their shopping list, so in those scenarios the event won't fire and the entity may never be saved to the DB's cache tables. While 100% coverage might have been nice for debugging purposes, it wasn't worth dealing with the limbo state of an entity's existence that using RemovedCallback would have entailed.
I am trying to update a QTreeWidget every 60 seconds. I have it on a QTimer right now, but my concern is that the update will disrupt the user's progress (for example, if they have a parent expanded so you can see the children, the update completely resets the structure). Is there a model or anything else I can do to prevent this from hurting their progress?
It is not possible for the view to update and remember the previously expanded items on its own. You could cache the state of your view and reconstruct it with QTreeWidget::expand and QTreeWidget::scrollTo after an update, but the user will still see the view collapse and expand again. Also, it would not be enough to store the currently expanded items' indexes, because those might change after the update; you'd have to cache some unique identifier and search for it in the updated widget afterwards.
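Language aside, the save-and-restore idea looks roughly like this (a JavaScript sketch against a hypothetical tree API, standing in for the C++ Qt calls):

// Remember which items are expanded, keyed by a stable identifier.
function saveExpandedIds(tree) {
    return tree.items().filter(function (item) {
        return item.isExpanded();
    }).map(function (item) {
        return item.id;
    });
}

// After the update, look those identifiers up again and re-expand them.
function restoreExpandedIds(tree, ids) {
    tree.items().forEach(function (item) {
        if (ids.indexOf(item.id) !== -1) {
            tree.expand(item); // the Qt equivalent would be QTreeWidget::expand
        }
    });
}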
What you are trying to do is quite unusual, since normally you'd only update the widget when the data changes instead of on a fixed interval.
Consider using a QTreeView in combination with a QAbstractItemModel instead of a QTreeWidget, because the latter is designed to hold fairly static data. Then you can emit dataChanged on the model, which automatically updates the QTreeView.
Do I need to kill all sprites and animations before I switch from state A to state B in Phaser, or does Phaser clean them up automatically?
Kamen Minkov's answer is almost right, but the argument you need to consider is clearWorld.
If you set it to false, all your objects will remain when you switch state, a bit as if you had set up both states at the same time. Otherwise, the default behavior is indeed to destroy all of your game objects when switching states.
clearCache is about clearing the preloaded assets (meaning you'll have to preload them again). Most of the time you'll want to leave it false, unless, for instance, you've preloaded a large amount of assets for a cinematic and you don't need them anymore; in that case removing them from the cache is probably a good idea to free up some memory.
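As a minimal sketch (the state keys are placeholders):
// Keep the current world alive while the new state runs, e.g. for an overlay:
game.state.start("pauseMenu", false, false);
// Also flush the preloaded assets, e.g. after a one-off cinematic:
game.state.start("afterCinematic", true, true);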
There's a clearCache bool parameter for Phaser.StateManager.start() (the third one), so you most probably don't need to do anything manually.
API Docs for StateManager
Kill doesn't really mean clean up as in memory cleanup. It has nothing to do with removing a sprite from the game; it just declares the sprite killed and removes it from view. I believe Phaser actually handles the cleanup for you automatically when you switch between states.
You should free the memory used by your game state in the shutdown method, which is automatically called by Phaser when changing states.
Example:
MyState.prototype.shutdown = function ()
{
this.background.destroy(); //Phaser.Image
this.mySprite.destroy(true); //Phaser.Sprite
this.myImage.destroy(true); //Phaser.Image
this.game.cache.removeImage("image-I-wont-use-anymore", true);
};
I have a background process receiving and applying changes to Core Data entities from a back end server using Restkit which works really well. All the entities have a version number property which updates when the back end accepts changes and publishes a new version. If an entity the user is viewing is changed I need to update the view with the latest version information.
Using KVO to observe version number for the current entity and refreshing the view when it changes works really well as long as version number is the last property.
That is, the 'column order' matters, and property updates are atomic. If the version number is the last property then when the observer is invoked all changes to all entity properties will have been applied.
If version number is not the last property defined, then when the observer is invoked the updated values of the properties after it will not yet have been applied.
The solution is to change the database and ensure that version number is always last. This works; however, I cannot find anything in the documentation suggesting that the order in which property changes are applied is guaranteed.
I assume the only watertight way to be notified of a complete set of changes is to register for managed object context change notifications and then process those notifications looking for changes to objects of interest. My concern is that this is not fine-grained, and there will be a lot of unnecessary processing to find relatively few things of interest.
Is this correct, or is there a way to get a consistent view of an object's full set of changes when using KVO?
If you wanted to use KVO you would need to layer some change management on top, such as: when the managed object is saved, check the version number and change another (non-persistent) attribute that is being observed. You can be sure that everything has been updated as a logical set when the object is saved.
Generally, the context save notification is the approved approach. So long as you aren't making thousands of changes or a few very large saves to the context, it shouldn't be an issue. You can also look at using predicates to filter the changes and/or a fetched results controller (which does the observation for you).