FlxTilemap is a very handy implementation of a tilemap in the HaxeFlixel library. Currently I have working code taking maps generated with the Ogmo map editor and loaded with FlxOgmoLoader (also from the HaxeFlixel library) into a FlxTilemap. I would like to have a world composed of multiple tile maps that are seamlessly displayed as the player moves.
It seems this is not supported by the library. Could someone provide ideas or references on how to implement this efficiently?
While it's not perfect, you could design your tilemaps so they connect with each other, and keep loading them (filtering as needed) as the player moves, for example:
if (player.x > currentTilemap.x + currentTilemap.width) {
    // nextLevelAsset, tileGraphic, etc. are placeholders; load the adjacent map with FlxOgmoLoader
    var next = new FlxOgmoLoader(nextLevelAsset).loadTilemap(tileGraphic, tileWidth, tileHeight, "tiles");
    next.setPosition(currentTilemap.x + currentTilemap.width, currentTilemap.y); // flush against the right edge
    tilemapGroup.add(next);
}
Also, to keep it from running out of memory, use isOnScreen() to make the tilemaps invisible and inactive while they aren't on camera.
I developed a simple game using C#, Windows Forms, and the SFML graphics library.
Fire Tank Game
The goal of the game is to develop an algorithm, carried out by the tanks (moving, shooting, etc.), to extinguish the fire on the map.
The game can draw some simple graphics, works with XML (the map is drawn from an .xml file), has a simple GUI (menu, combobox, datagridview, buttons, labels), and uses Entity Framework to query data from a small SQL Server database.
Now I would like to try to turn it into a web game, so I need help deciding which languages (Python, PHP, JavaScript?) and libraries I should use to develop the same game for the web. In total, the necessary features are:
Draw graphics.
Draw GUI.
Work with DB (Also, I would like to move data into some cloud DB, e.g. Azure).
Work with XML.
Also, I implemented simple "sign in/up" logic. Is there any library that registers a user and adds them to the DB? So the fifth feature is:
Sign in/up feature.
Lastly, the game uses an inconvenient way to build algorithms (users need to choose algorithm steps from a combobox). Is there some library that allows doing this via drag & drop?
Drag & drop items to the table (datagridview).
Aside from Unity, as suggested, I can recommend a JavaScript library called p5.
I have worked with both SFML and p5 and, allowing for the differences, they are pretty similar.
You can draw 2D stuff (circles, lines, rectangles, etc) with it.
It also contains GUI components to work with (buttons, sliders, etc).
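For a taste of the API, here is a minimal sketch in p5's global mode (TypeScript flavoured; the declare lines stand in for the real p5 globals, which @types/p5 would normally provide, and the shapes and button are just placeholders):

declare function createCanvas(w: number, h: number): void;
declare function background(gray: number): void;
declare function rect(x: number, y: number, w: number, h: number): void;
declare function ellipse(x: number, y: number, w: number, h: number): void;
declare function createButton(label: string): { position(x: number, y: number): void; mousePressed(cb: () => void): void };

let paused = false;

function setup(): void {
  createCanvas(640, 480);
  const btn = createButton("Pause");          // a GUI element, much like a WinForms button
  btn.position(10, 490);
  btn.mousePressed(() => { paused = !paused; });
}

function draw(): void {                       // p5 calls this roughly 60 times per second
  if (paused) return;
  background(220);
  rect(100, 100, 50, 30);                     // a "tank"
  ellipse(300, 200, 40, 40);                  // a "fire"
}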
If you want a web environment, you will almost certainly need a server side, probably based on PHP, to access the database. Alternatively, you can use a JS database library if your system is simple enough.
You can read XML files with JS, but you would probably prefer to use JSON files.
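For the XML requirement specifically, browsers can parse XML natively, so the existing .xml maps could be reused. A minimal sketch (the file name and element/attribute names are placeholders):

const res = await fetch("map.xml");                              // placeholder file name
const doc = new DOMParser().parseFromString(await res.text(), "application/xml");
for (const tank of Array.from(doc.querySelectorAll("tank"))) {   // placeholder element name
  console.log(tank.getAttribute("x"), tank.getAttribute("y"));
}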
Maybe this can cover your basic needs.
There are two good options:
A: Convert it: https://bridge.net/
B: Unity, as 3Dave stated
Is there an example of a multi-threading implementation using the game toolkit? I have the MultiCube example, but that is for WinForms and I use WPF, and I can't use the game toolkit tools from Direct3D11 because I need an instance of the GraphicsDevice. The MultiCube example displays nothing but a black screen; I tried it on several computers. My video card doesn't support command lists; I don't know if that has anything to do with it. I was also wondering how many models SharpDX can handle, because I have to draw hundreds of small scaffold couplers, and after adding about 100 on the default GraphicsDevice, the application slows down and locks up. Any help would be appreciated.
Regards,
Haris
I was looking for the same thing but couldn't find any examples. I tried converting the MultiCube example to use the toolkit and got it basically working; it's still very messy at the moment and needs optimizing, but at least it renders.
https://github.com/PlehXP/SharpDX-Samples/tree/MultiCubeToolkit/Toolkit/WindowsDesktop/MultiCube
I am looking for a way to have an animated character in my game. I would like to have a running animation loop when he is running, a jump animation when he jumps etc.
Does the awe6 framework provide anything like this? Maybe using spritesheets, or separate images for each frame.
If I have to use my own system, are there any popular libraries that can help do this, and that work well with awe6? And how would I use it with the framework?
awe6 is not something that works out of the box, but rather an architecture for you to build on. It is not designed to fulfil your requirements without work from you, and it requires you to have at least a good understanding of how to integrate another framework/target runtime for what you want. They do have a wiki, however; you can start with http://code.google.com/p/awe6/wiki/QuickStart
There are many popular frameworks out there with a lower barrier to entry, such as haxepunk.com, haxeflixel.com, and https://github.com/aduros/flambe. They all have their own strengths and weaknesses for you to weigh, and they can all easily handle what you're asking for.
Using the Google Earth plugin API, I want to play a tour authored in KML with the touring capability, but let the user modify the camera controls during playback.
Is it possible?
It depends on how much modification you want to allow.
Tour playback is designed to work with the user changing the orientation of the view (via dragging or the camera controls), but not the position. If the user stops changing the view for long enough, the camera will smoothly snap back to the default orientation for that point in the tour. The zoom and panning controls disappear during the tour, but if the user tries to change the camera position via other methods (like the keyboard), the tour will typically be paused.
The Earth API, however, allows you to absorb or change any of those event behaviors, since you can add a listener for mouse and keyboard events and prevent them from processing as usual or act on them in a completely different way.
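As a rough sketch (assuming ge is an initialized GEPlugin instance and the Earth API loader has been set up), absorbing mouse input during playback looks something like this:

declare const google: any;   // Earth API loader namespace
declare const ge: any;       // initialized GEPlugin instance (assumed)

// Swallow mouse presses so the default camera interaction never happens,
// then optionally act on the event in a completely different way.
google.earth.addEventListener(ge.getWindow(), "mousedown", (event: any) => {
  event.preventDefault();    // cancels the default behavior of this KmlMouseEvent
  // ...custom handling here...
});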
If you haven't tried it, there's a tour example in the Google Code Playground where you can see what happens with different interactions based on the default event responses.
Finally, if you want really custom tour behavior -- like allowing certain kinds of movement of the camera away from the tour path even as the tour continues -- you will most likely need to write your own camera movement code. Getting the basics of this working isn't too difficult, but getting the right intuitive feel for that kind of interaction is difficult, and probably dataset-dependent. To get started, you can parse the KML directly, find the tour and the tour primitives it contains, and then use the regular camera controls you cited to move between those primitives, adding offsets for any user-supplied movements.
edit: the Earth API tour page cited in the question has an example of getting started with parsing the KML file by getting the plugin to do it for you. You can use this to implement the above suggestion by using the KML DOM walking code to find all the tour primitives (instead of halting as soon as a Tour element is found).
This isn't always the most efficient approach (plugin function calls have overhead, and meanwhile browsers have built-in XML parsing capabilities), but it may be the most straightforward way to start. For many tours, this approach would be perfectly sufficient.
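A sketch of that approach (ge is an initialized GEPlugin instance, kmlString is the fetched KML text, and walkFeatures is our own helper, not part of the Earth API):

declare const ge: any;              // initialized GEPlugin instance (assumed)
declare const kmlString: string;    // the fetched KML text (assumed)

function walkFeatures(feature: any, tours: any[]): void {
  if (feature.getType() === "KmlTour") {
    tours.push(feature);            // collect the <gx:Tour>, but keep walking
  }
  if (feature.getFeatures) {        // only container features have children
    const children = feature.getFeatures().getChildNodes();
    for (let i = 0; i < children.getLength(); i++) {
      walkFeatures(children.item(i), tours);
    }
  }
}

const tours: any[] = [];
walkFeatures(ge.parseKml(kmlString), tours);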
It is possible, but pretty hard to implement and even harder to control well. I have been playing around with this for quite a while now. I have not had much success myself, but here are two examples by others who have made some progress.
Firstly, the underlying principle they are using is based upon the TICK; a simple example of it is here:
http://earth-api-samples.googlecode.com/svn/trunk/examples/event-frameend.html
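In rough outline (again assuming ge is an initialized GEPlugin instance), the tick approach re-applies your camera adjustments on every rendered frame:

declare const google: any;   // Earth API loader namespace
declare const ge: any;       // initialized GEPlugin instance (assumed)

google.earth.addEventListener(ge, "frameend", () => {
  const view = ge.getView().copyAsCamera(ge.ALTITUDE_ABSOLUTE);
  // ...blend the tour's camera with user-supplied offsets here...
  ge.getView().setAbstractView(view);
});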
The two examples are:
http://maps.myosotissp.com/
and
http://racemyrace.com/race.php
Also, here is an example that worked up until recently. I am not sure why it has stopped, but it appears you can still read the JS being used. It is made by the same person who created the racemyrace website:
http://www.thekmz.co.uk/GEPlugin/pathtour/v3/path_tour_v3.htm
If you happen to work something out, I would appreciate it if you created a simple example page and shared the link. It will probably take a while, so if you could look up my email via my profile and notify me, that would be even better.
Good Luck!
I have a noobish question for any graphics programmer.
I am confused how some games (like Crysis) can support both DirectX 9 (in XP) and 10 (in Vista)?
What I understand so far is that if you write a DX10 app, then it can only run on Vista.
Maybe they have two code bases -- one written in DX9 and another in DX10? But isn't that overkill?
They have two rendering pipelines, one using DX9 calls and one using DX10 calls. The APIs are not compatible, though a majority of any game engine can be reused for either. If you want some Open Source examples of how different rendering pipelines are done, look at something like Ogre3d, which supports OpenGL, DX9, and (soon) DX10 rendering.
The rendering layer of games is usually a fairly well isolated/abstracted part of the whole application. As far as the game engine is concerned, each frame you are simply building up a list of conceptual objects (trees, characters, etc.). If the game engine chooses to render a particular object, then it's up to the rendering layer how to actually translate that intent into DX draw calls. A DX10 rendering will generate a different set of draw calls to a DX9 layer, but conceptually they are still performing the same action - 'render this tree'.
Rendering is nicely abstracted because it's rare that you want to get any information back from the rendering layer; once the 'render this tree' action is performed, the game engine will just assume that the rendering looks correct. There is little need to handle different potential results from DX9/DX10 rendering calls, because 99.9% of the information flows from the engine to the graphics system, and the 0.1% that comes back likely takes the same form in both APIs.
The application setup is a little more icky, because you've got to ask the system whether or not DX10 is supported and gracefully fall back on DX9 otherwise, but this is standard fare for application setup (in the same way that the game has to pick a resolution, refresh rate, input device, etc.).
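The pattern, reduced to a sketch (TypeScript purely for illustration; a real engine would do this in C++, and every name here is invented):

interface Tree { }                       // stand-in for one conceptual scene object

interface Renderer {
  renderTree(tree: Tree): void;          // "render this tree", API-agnostic
}

class Dx9Renderer implements Renderer {
  renderTree(tree: Tree): void { /* ...translate into DX9 draw calls... */ }
}

class Dx10Renderer implements Renderer {
  renderTree(tree: Tree): void { /* ...translate into DX10 draw calls... */ }
}

// At startup, probe for DX10 support and gracefully fall back to DX9.
function createRenderer(dx10Supported: boolean): Renderer {
  return dx10Supported ? new Dx10Renderer() : new Dx9Renderer();
}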
It is likely that they have an abstraction layer and develop against that. At run-time they instantiate the concrete engine that wraps DX9 or DX10.
I imagine their abstraction sits very close to the DirectX layer, and simply provides DX9 with sensible manual implementations of DX10 features, or takes advantage of the extra DX10 functionality when it is available.