Do I need to have 16 character sprites if I have 4 classes and 4 races?

So I've started development on a roguelike platformer. It would be difficult to explain exactly what it's like, but that doesn't matter. What does matter is how many sprites I need.
I have 4 classes and 4 races (as of now; I may add more later). The classes are Thief, Warrior, Wizard, and Archer, and they all have different suits. As for race: Human, Elven, Reptilian, and Dwarven. Since the player can choose both their race and their class, do I need to make a sprite of every movement option for every combination of these? That alone would be 16 different sprites, and then there's movement, jumping, attacking, etc... Ugh, I'm getting a headache just thinking about it. Help please?

You can do this with skeletal animation.
You still need a body sprite for each class and race, but you can keep the skeleton (the bones) separate from the bodies. You animate the skeleton, then attach the appropriate body to it.
That means you design, for example, a run animation for the skeleton once; when you need to play that animation for any class and race, you just apply the skeleton to the specific sprites.
See this.

You will have to do a lot of drawing anyway, but what you can do is separate the classes and the races from the different movements. For example, draw a body without any outfit and without any race-specific attributes (assuming the dwarf and the elf have the same body size...). Draw this body in all the positions you want. Then draw sprites with only the heads of each character, and then the outfits with nothing in them.
Then, for each character, in the draw event, draw the standard body, then the character-specific head, then the outfit.
The trick is to draw your body animations in such a way that the outfit and the head stay approximately in place during the movements, and the body does all the moving.
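As a rough, engine-agnostic sketch of that layered draw in C++ (the Sprite type and drawSprite function below are placeholders, not a real engine API):

// Hypothetical sprite type and draw call; substitute your engine's equivalents.
struct Sprite { /* texture and frame data */ };

void drawSprite(const Sprite& s, int frame, float x, float y) {
    // engine-specific blit of the given animation frame at (x, y)
}

struct Character {
    Sprite body;    // shared, unclothed, animated body
    Sprite head;    // race-specific head frames
    Sprite outfit;  // class-specific outfit frames
};

void drawCharacter(const Character& c, int frame, float x, float y) {
    drawSprite(c.body, frame, x, y);    // the body does all the moving
    drawSprite(c.head, frame, x, y);    // then the head on top
    drawSprite(c.outfit, frame, x, y);  // then the outfit on top
}

Each race then only needs its own head frames and each class its own outfit frames, instead of full body animations for all 16 combinations.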
But in the end, it all depends on the style you want. I would recommend my solution if you are doing pixel art. Otherwise, if your characters are very detailed, you may want to use skeletal animation as Ali Bahrami suggested.

I recommend creating a basic sprite for each race, then copying it and just adding the appropriate clothing/armour for each class.

Related

Godot - Game freezes when Area2D Monitoring turned on

So when my player falls off the map, I want the level to reload. I have used an Area2D with a CollisionShape2D to create an area that calls a function when the player collides with it. However, when the game runs with this code included, the player animates through a few frames and then the game completely freezes, before I can even move the player.
func _on_Area2D_body_entered(body):
    get_tree().reload_current_scene()
If I delete this code, or set monitoring to off, and re-run the game it will not freeze.
Below is a screenshot of my level design.
(screenshot: Level design)
Any help would be greatly appreciated :) - Is this a bug or am I doing something stupid?
When I set a breakpoint on the get_tree().reload_current_scene() line, the following report shows:
(screenshot: debugger)
Does this mean the player is colliding with a tile? If that's the case, I don't see how, as the program freezes before the player even touches the ground.
As I said in the comments, this line:
get_tree().reload_current_scene()
Returns a value.
Now, you have said that 0 is "continuously outputted". In this context 0 means OK, in other words: it was able to reload the scene. The problem is the "continuously" part. It means that the scene reloads and then this code is triggered, and then it reloads again, and then this code is triggered again, and so on.
Now, apparently the Area2D is colliding with the TileMap. That makes sense. If it is a collision between the Area2D and a tile upon loading the scene, you would get the observed behavior. And the way the Area2D and TileMap are positioned in the scene supports the idea.
As for fixing it: I'll give you three solutions. Any of them will work, each with its own drawbacks and caveats:
Don't position the Area2D so that it intersects non-passable tiles. This is easy to do by moving the Area2D further down, or by removing any tiles that overlap it.
The drawback of this approach is that it is fragile: you may forget about it in the future and move the Area2D, add tiles, or change something else that makes the problem return. It also might not work well with your intended scenario design.
Change the collision_mask and collision_layer so that the tiles and the Area2D do not collide. As long as the bits of one object's mask do not overlap the bits of the other object's layer, and vice versa, Godot will not even check for a collision between them.
The main drawback of this approach is that you have a limited number of layers.
There is also the fact that it is less intuitive than simply placing things so that they don't collide.
To make it easier to work with, assign layers to the different kinds of things in your game. Go to Project Settings, on the General tab, under Layer Names > 2D Physics, and give them names (e.g. "environment", "enemies", "enemy bullets", "player", "player bullets", "items", "others").
Then, on each object's collision_layer, set what that object is, and on its collision_mask, set everything it must collide with, with the caveat that Godot will check the collision both ways.
In this case you would set the collision_layer of the player character's physics object (the KinematicBody2D) to "player" (or similar), and put that same layer in the collision_mask of the Area2D, so they collide. Set the collision_layer of the TileMap to something else (e.g. "environment") that is not in the collision_mask of the Area2D, so the Area2D and the TileMap do not collide. And set the collision_mask of the player character to include the layer you gave the TileMap, so the player character still collides with it. I hope that makes sense.
And, of course, you can filter in the Area2D with a little code. You can check the class, the node group, or the name of the physics body. For example, you can insert something like this at the start of the method: if body.name != "player": return. That way the method exits before it reaches reload_current_scene unless it was entered by the correct physics body.
The drawback of this approach is that the collision is still checked and registered, so it performs worse than using collision_mask and collision_layer. But it will work, and it will perform fine for a small to mid-sized game.
For more complex situations, you may need to combine these approaches (because, as I said, there is a limited number of layers, so you may need to add filtering on top of them). I have a more detailed explanation of how to set up physics objects, including the techniques mentioned here, in another answer.

How do I retain proper background on a character-based graphics system?

I was feeling retro and decided to write my favorite 8-bit computer game (Williams' Defender) on my first computer (a Commodore PET 4032). All the code is being written in 6502 assembly language. For those not familiar with the PET, all the graphics are character-based, and to build games you move different characters around a 40-column x 25-row screen. This is very old tech: there are no sprites, no graphics layers, no ability to AND at the screen level, etc., that we would be used to today.
I want the game to have multiple "laser beams" that can be fired at the same time, and those laser beams might pass over one another as they traverse the screen. Right now, as a beam moves along the screen, it stores in memory what was underneath it and then puts that back as it moves on, restoring the background to its original state. The problem comes when a second laser passes over the first: the first moves along and restores the original background rather than leaving the second laser on top, and then that second laser moves along and leaves artifacts of the first behind.
Is there a classic, lightweight algorithm or rule set that allows multiple objects to move across one another such that the proper original content underneath is retained? I've tried different approaches (swapping backgrounds as things traverse, etc.) but nothing seems to give me the result I want.
It's certainly an option to have each sprite keep a copy of whatever it overwrote, and have the sprites erase themselves in the opposite order to that in which you drew them. That can't fail, but it assumes you have time for a full sprite draw and erase each frame.
You can also use a screen-sized buffer of 'is background' and 'is sprite' flags. Each time a sprite is drawn, mark its character locations as 'is sprite'. To erase all sprites, iterate through the screen-sized buffer, repainting the background anywhere that isn't marked 'is background'. You can keep upper and lower bounds of the updated positions if iterating all 1000 potential cells (40 x 25) is too great a cost.
You can also compare the differences between two such buffers to reduce flicker substantially, supposing you have only one video buffer: paint the new sprites first, wherever they should go, noting them in the new buffer. Once all sprites are drawn, fill in the background anywhere that the new buffer isn't marked 'is sprite' but the old one is.
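Here is a rough C-style sketch of the single-buffer flag approach (the real game would be 6502 assembly, and the 'screen' pointer below is just a stand-in for the PET's video RAM):

#include <cstdint>

const int W = 40, H = 25, CELLS = W * H;

uint8_t background[CELLS];   // pristine background characters
bool    isSprite[CELLS];     // true wherever a sprite was drawn this frame
extern uint8_t* screen;      // stand-in for video RAM ($8000 on the PET)

void drawSpriteCell(int x, int y, uint8_t ch) {
    int i = y * W + x;
    screen[i] = ch;
    isSprite[i] = true;      // mark the cell so the erase pass knows about it
}

void eraseAllSprites() {
    for (int i = 0; i < CELLS; ++i) {
        if (isSprite[i]) {              // anything not marked as background
            screen[i] = background[i];  // repaint from the stored background
            isSprite[i] = false;
        }
    }
}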
I would suggest:
Maintain a model of the game state that would allow you to redraw the entire screen at any time. This would include the positions and other state of all movable elements.
As you update the game state between frames, accumulate a mask of all the cells that will need to be redrawn because something in them changed or moved.
Iterate through the game elements in depth order from top to bottom, redrawing the parts of each element that are in the changed-cell mask.
Remove any cell you draw from the changed-cell mask so it won't be overwritten by deeper elements.
The background comes last and redraws all remaining cells, leaving you with an empty mask ready for the next frame.
This procedure avoids the flicker that would be caused by erasing and then redrawing a cell within a single frame.
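A C-style sketch of that procedure (again, the real thing would be 6502 assembly; Element, cellsOf, and glyphAt are hypothetical placeholders):

#include <cstdint>
#include <vector>

const int W = 40, H = 25;

extern uint8_t* screen;              // stand-in for video RAM
extern uint8_t  background[W * H];   // static background characters
bool changed[W * H];                 // accumulated during the game-state update

struct Element { /* position, glyphs, depth, ... */ };
extern std::vector<Element*> elementsFrontToBack;  // sorted top (front) to bottom
std::vector<int> cellsOf(const Element& e);        // cells the element covers (hypothetical)
uint8_t glyphAt(const Element& e, int cell);       // character it shows there (hypothetical)

void redrawFrame() {
    // Draw movable elements front to back, each claiming its changed cells.
    for (Element* e : elementsFrontToBack) {
        for (int i : cellsOf(*e)) {
            if (changed[i]) {
                screen[i] = glyphAt(*e, i);  // draw this element's character
                changed[i] = false;          // deeper elements won't overwrite it
            }
        }
    }
    // The background goes last and fills whatever is still marked as changed.
    for (int i = 0; i < W * H; ++i) {
        if (changed[i]) {
            screen[i] = background[i];
            changed[i] = false;
        }
    }
}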
Various indexing structures can be added to the changed-cell mask to avoid unnecessary drawing work. Which optimizations are appropriate depends on your game. If the background is mostly static, for example, then it would be useful to add the coordinates of each changed cell to a list during the update, and then only check those cells during the background redraw. Or you could do this based on the previous positions of all movable elements... up to you.
If the majority of the scene changes in every frame, then you can skip the mask accumulation and just start with a full screen mask... although I think a PET might not be fast enough for such games.
Having never programmed for the PET, I can't offer any specific advice about what you might try, but I can recommend keeping a copy of the current on-screen background in about 1K of RAM. That way, you can use that data to restore the background when removing the last "sprite" written to a tile. Unfortunately, that also requires you to keep your code and object data combined under 31K, unless you are programming this as a cartridge. Just a few thoughts, for what they're worth.

OpenSceneGraph: Don't update the z-buffer when drawing semi-transparent objects

Question
Is it possible to tell OpenSceneGraph to use the Z-buffer but not update it when drawing semi-transparent objects?
Motivation
When drawing semi-transparent objects, the order in which they are drawn is important, as surfaces that should be visible might be occluded if they are drawn in the wrong order. In some cases, OpenSceneGraph's own intuition about the order in which the objects should be drawn fails: semi-transparent surfaces become occluded by other semi-transparent surfaces, and "popping" (if that word can be used in this way) may occur when OSG thinks the order of the objects' center distances to the camera has changed and decides to change the render order. It therefore becomes necessary to control the render order of semi-transparent objects manually, by specifying the render bin for each object using the setRenderBinDetails method on the state set.
However, even this might not always work, as in the general case it is impossible to choose a render order for the objects (even if the individual triangles in the scene were ordered) such that all fragments are drawn correctly (see e.g. the painter's problem), and one might still get occlusion. An alternative is to use depth peeling or some other order-independent transparency method, but, frankly, I don't know how difficult this is to implement in OpenSceneGraph or how much it would slow the application down.
In my case, as a trade-off between aesthetics, algorithmic complexity, and speed, I would ideally always want to draw a fragment of a semi-transparent surface, even if another fragment of another semi-transparent surface that (in that pixel) is closer to the camera has already been drawn. This would prevent both popping and the occlusion of semi-transparent surfaces by other semi-transparent surfaces, and would effectively be achieved if, for every semi-transparent object that is rendered, the Z-buffer were used to test visibility but not updated when the fragment is drawn.
You're totally on the right track.
Yes, it's possible to leave Z-test enabled but turn off Z-writes with setWriteMask() during drawing:
// Disable Z-writes
osg::ref_ptr<osg::Depth> depth = new osg::Depth;
depth->setWriteMask(false);
myNode->getOrCreateStateSet()->setAttributeAndModes(depth, osg::StateAttribute::ON);
// Enable Z-test (needs to be done after Z-writes are disabled, since the latter
// also seems to disable the Z-test)
myNode->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::ON);
https://www.mail-archive.com/osg-users@openscenegraph.net/msg01119.html
http://public.vrac.iastate.edu/vancegroup/docs/OpenSceneGraphReferenceDocs-2.8/a00206.html#a2cef930c042c5d8cda32803e5e832dae
You may wish to check out the osgTransparencyTool nodekit we wrote for a CAD client a few years ago: https://github.com/XenonofArcticus/OSG-Transparency-Tool
It includes several transparency methods that you can test with your scenes and whose source implementation you can examine, including an Order Independent Transparency Depth Peeling implementation and a Delayed Blend method inspired by Open Inventor. Delayed Blend is a high-performance, single-pass, unsorted approximation that probably checks all the boxes you want if absolute transparency accuracy is not the most important criterion.
Here's a paper discussing the various approaches in excruciating detail, if you haven't read it:
http://lips.informatik.uni-leipzig.de/files/bathesis_cbluemel_digital_0.pdf

Modifying a model and texture mid-game code

Just a question for anyone out there who knows some game engine pretty well. What I am trying to implement is some sort of script or code that will allow me to modify a custom game character and its textures mid-game. A few examples would be along the lines of changing facial expressions and body part positions, as in the game SecondLife. I don't really need a particular language, feel free to use your favorite; I'm just really looking for an example of how to go about this.
Also, I was wondering if there is any way to combine textures for optimization. For example, if I wanted to add a tattoo to a character mid-game, is there any code that could combine his body texture and the tattoo texture into one texture (that way I can render just one texture per body)?
Any tips would be appreciated. Sorry if the question is a wee bit too vague.
I think that "swappable tattoos" are typically done as a second render pass of the polygons. You could do some research into "detail maps" and see if they provide what you're looking for.
As for actually modifying the texture data at runtime, all you need to do is composite the textures into a new one. You could quite likely even use the rendering API to do it for you: render the textures you want to combine, in the order you want to combine them, into a new texture. Mind you, doing this every frame would actually hurt performance, since it is slower to render two textures into one and then draw the result than it is to just draw the two sources one after the other.
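As a rough illustration of the compositing step (engine-agnostic; the ImageRGBA type is made up here, and uploading the result to the GPU is left out), a one-time CPU-side composite of a tattoo onto a body texture could look like this:

#include <cstdint>
#include <vector>

struct ImageRGBA {
    int width = 0, height = 0;
    std::vector<uint8_t> pixels;   // width * height * 4 bytes, RGBA order
};

// Alpha-blend 'overlay' (the tattoo) onto 'base' (the body texture).
// Assumes both images have the same dimensions.
ImageRGBA compositeOver(const ImageRGBA& base, const ImageRGBA& overlay) {
    ImageRGBA out = base;                          // start from the body texture
    for (size_t i = 0; i < out.pixels.size(); i += 4) {
        float a = overlay.pixels[i + 3] / 255.0f;  // overlay alpha
        for (int c = 0; c < 3; ++c) {              // blend R, G, B
            out.pixels[i + c] = static_cast<uint8_t>(
                overlay.pixels[i + c] * a + base.pixels[i + c] * (1.0f - a));
        }
        // keep the base alpha so the body's own transparency is unchanged
    }
    return out;
}

Do this once when the tattoo is applied, upload the result, and from then on the character renders with a single texture.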

Modelling an I-Section in a 3D Graphics Library

I am using Direct3D to display a number of I-sections used in steel construction. There could be hundreds of instances of these I-sections all over my scene.
I could do this in two ways:
Method A: model each flange (horizontal plate) and the web (vertical plate) as a single flat rectangle with no thickness.
Method B: model each plate as a thin box, so that every face has an outward-facing side.
Using method A, I have fewer surfaces. However, with backface culling turned on, the surfaces will be visible from only one side. If backface culling is turned off, then the flanges and the web may be rendered in the wrong order.
Method B seems correct (and I could keep backface culling turned on), but in my model the thickness of the plates is of no importance, and I would like to avoid having to create a separate triangle strip for each side of the plates.
Is there a better solution? Is there a way to switch off backface culling for only certain calls of DrawIndexedPrimitives? I would also like a platform-neutral answer to this, if there is one.
First off, backface culling doesn't have anything to do with the order in which objects are rendered. Other than that, I'd go for approach B, for no particular reason other than that it will probably look better. Also, this object is probably no more than a handful of triangles; having hundreds of instances in a scene shouldn't be an issue. If it is, look into hardware instancing.
In OpenGL you can switch backface culling on or off before each draw call:
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);    // cull front faces
// or
glCullFace(GL_BACK);     // cull back faces (the default)
// or
glDisable(GL_CULL_FACE); // turn culling off entirely
I think something similar is also possible in Direct3D.
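In Direct3D 9, for instance, the rough equivalent is the D3DRS_CULLMODE render state, set before the draw calls in question (a minimal sketch, assuming dev is your IDirect3DDevice9*):

#include <d3d9.h>

void drawWithoutCulling(IDirect3DDevice9* dev) {
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);  // disable backface culling
    // ... DrawIndexedPrimitive calls for the single-sided plates go here ...
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);   // restore the default cull mode
}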
If your I-sections don't change that often, load all the sections into one big vertex/index buffer and draw them with a single call. That's the most performant way to draw things, and the graphics card will do a fast job even if you push half a million triangles at it.
Yes, this requires that you duplicate the vertex data for all sections, but that's how D3D9 is intended to be used.
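A minimal D3D9 sketch of that batching idea (error handling omitted; the Vertex layout and helper names are illustrative only, and 'verts' is assumed to hold every section's pre-transformed vertices):

#include <d3d9.h>
#include <cstring>
#include <vector>

struct Vertex { float x, y, z; };   // matches D3DFVF_XYZ
const DWORD kFvf = D3DFVF_XYZ;

IDirect3DVertexBuffer9* buildBatchedBuffer(IDirect3DDevice9* dev,
                                           const std::vector<Vertex>& verts) {
    IDirect3DVertexBuffer9* vb = nullptr;
    const UINT bytes = static_cast<UINT>(verts.size() * sizeof(Vertex));
    dev->CreateVertexBuffer(bytes, 0, kFvf, D3DPOOL_MANAGED, &vb, nullptr);

    void* dst = nullptr;
    vb->Lock(0, bytes, &dst, 0);          // copy all sections' vertices in one go
    std::memcpy(dst, verts.data(), bytes);
    vb->Unlock();
    return vb;
}

void drawBatched(IDirect3DDevice9* dev, IDirect3DVertexBuffer9* vb, UINT triCount) {
    dev->SetStreamSource(0, vb, 0, sizeof(Vertex));
    dev->SetFVF(kFvf);
    dev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triCount);  // one call for everything
}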
I would go with A: at the distance you would be viewing these from, drawing all those nearly degenerate triangles in B would be a waste of processing power.
Also, I would simply throw them at the Z-buffer and let it sort everything out.
If it gets too slow, then I would start looking at optimizing, but even consumer graphics cards can draw millions of polygons per second.
