Resources for making a 2D sprite? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am making a 2D game. Can you post links to tutorials for making 2D game sprites, and tutorials for browser game development?
It would be really helpful.
Thanks to all.

Here's an article with quite a few details.
This site also has some sprite-related resources, and the forums have some guides from a number of experienced people.
If you want to learn about making 2D sprites, the best advice I can give is to learn from the hard work of others. Find a game with sprites that you can edit, and start by modifying the existing sprites (a simple recolor is an easy starting point). Then you can move on to larger sprite modifications (shape, size, etc.), "swapping" sprites between games, creating a simple game and using sprites that you "borrowed" from an existing game, and so on.
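For the recolor starting point, here is a minimal sketch using Pillow; the file names and the color mapping are made up for illustration:

    # Minimal palette-swap recolor sketch using Pillow (pip install Pillow).
    # File names and the color mapping are placeholders for illustration.
    from PIL import Image

    SWAP = {
        (200, 40, 40, 255): (40, 40, 200, 255),   # red cloak -> blue cloak
        (150, 20, 20, 255): (20, 20, 150, 255),   # red shading -> blue shading
    }

    sprite = Image.open("hero.png").convert("RGBA")
    pixels = sprite.load()

    for y in range(sprite.height):
        for x in range(sprite.width):
            pixels[x, y] = SWAP.get(pixels[x, y], pixels[x, y])

    sprite.save("hero_blue.png")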

I've been thinking about this problem recently.
In the old days, sprites were hand-drawn pixel by pixel. This works well for flat 2D games (side-scrollers, cartoon adventure games, Z-axis top-down games, and such), particularly if they are at 320x200 resolution. Some examples of gorgeous hand-drawn sprite games are the Sierra and LucasArts adventure games, Disney's jump&runs, Capcom's fighting games, the Tyrian/Raptor-style top-down scrollers, and the early RTS games (C&C, WC1).
Some games, like Prince of Persia and Mortal Kombat, used sprites captured from footage of real actors (rotoscoping or digitization). That produced fluid motion, but looked 'flat'.
Between the mid-90s and the early-00s, character/item sprite-drawing was done by taking stills of 3D objects. Practically every 2D RTS game since around Age of Empires 1 did that. AFAIK Diablo, Baldur's Gate, Divine Divinity, and other such RPG games did the same. This is the reason those games came on so many CDs - they were chock-full of content.
This approach looks great (not flat, but "2.5D") but takes a lot of hard-drive space. Also, whereas you could produce hand-drawn sprites in Paint, the 2.5D ones require 3ds Max (or an equivalent).
One problem that arises with this approach is the combinatorial explosion in costume design (i.e. if you want to animate a character in three different coats with three different hats and three different pairs of pants, you need 3 x 3 x 3 = 27 distinct animation sets). The solution to this, as seen in Diablo II and Baldur's Gate, is rag-dolling - you produce separate sprites for every part of the body. This takes a lot of work. Blizzard made their own tools to produce their sprites, but I'm not sure there are sprite rag-dolling tools in the open.
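To make the idea concrete, here is a rough sketch (not Blizzard's tooling, just an illustration with Pillow; the layer file names are invented) of compositing per-part sprites at load time instead of baking every combination:

    # Sketch of the per-part ("rag-doll") idea: instead of baking 27 full
    # animation sets, keep one sprite layer per body part and composite
    # them when the frame is needed. Layer file names are placeholders.
    from PIL import Image

    def compose_character(coat, hat, pants, base="body.png"):
        frame = Image.open(base).convert("RGBA")
        for layer in (pants, coat, hat):        # draw order: back to front
            piece = Image.open(layer).convert("RGBA")
            frame.alpha_composite(piece)        # layers share the frame size
        return frame

    compose_character("coat_red.png", "hat_wide.png", "pants_leather.png").save("hero_frame0.png")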
More recently, most games are 3D. Many actually look worse than the old 2.5D ones, because a simple 3D model can animate well in sprites but poorly in real-time 3D. The difference is like that between a glamour shot of a celebrity, taken from a certain distance in certain lighting and then worked over in Photoshop, and the appearance of the same celebrity in real life (which may not be as glamorous).
I wonder if there are 3D Object -> Sprite programs. I know of one (don't remember the name at the moment), but are there others? At the very least I'm sure there are scripts for Maya and 3ds Max that take shots of an animated 3D object from different angles. Does anyone know more on this?
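I don't know the names of other dedicated 3D-object-to-sprite programs either, but the scripting side is straightforward. As an illustration (using Blender's Python API as a stand-in for Maya/3ds Max; the object name and output paths are assumptions), a batch render of eight facing directions might look like this:

    # Rough Blender sketch (run inside Blender as a script): spin an empty
    # that the camera is parented to and render one still per angle.
    # The object name ("SpriteRig") and output paths are assumptions.
    import math
    import bpy

    rig = bpy.data.objects["SpriteRig"]        # empty the camera is parented to
    scene = bpy.context.scene
    directions = 8                             # 8 facing directions

    for i in range(directions):
        rig.rotation_euler[2] = i * 2.0 * math.pi / directions
        scene.render.filepath = f"//sprites/dir_{i:02d}.png"
        bpy.ops.render.render(write_still=True)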

To make a 2D game sprite:
Open up Paint. Paint a picture. Save it as a BMP. You now have a one-frame sprite. You can add metadata to it in code if needed for the hotspot, collision info, etc. If you want it to animate, create a series of BMPs and display them one at a time at whatever speed you want the animation to run.
There's no need for a tutorial link for something like this. Or, you can download any one of thousands of sprite editors that do all of the above in one place.
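If you do want to see the "display them one at a time" part in code, here is a bare-bones sketch with pygame; the frame file names and the 10 FPS speed are placeholders:

    # Bare-bones frame cycling with pygame (pip install pygame).
    # Frame file names and the 10 FPS animation speed are placeholders.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((320, 240))
    frames = [pygame.image.load(f"walk_{i}.bmp").convert() for i in range(4)]
    clock = pygame.time.Clock()
    current = 0

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill((0, 0, 0))
        screen.blit(frames[current], (100, 100))
        pygame.display.flip()
        current = (current + 1) % len(frames)
        clock.tick(10)   # advance one frame every 1/10th of a second

    pygame.quit()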

Related

Merging little triangles into bigger ones in Kinect scan

I'm not sure if this problem has a solution, but I'll ask anyway. I'd also be glad for some literature to study and some keywords to search for.
Let's say I have a 3D scan made using a Kinect.
The scan contains only a single wall with a door in it. The output from the Kinect is composed of hundreds of little triangles.
What I want to achieve is to recognize where the wall is and where the door is, and merge the wall triangles into, let's say, a few, and do the same with the door triangles.
You want to look at mesh simplification algorithms, but before you do, you should probably familiarise yourself with the fundamental concepts of 3D graphics like vertices, matrices, and meshes. GDAL and PCL are two libraries that can help you achieve what you want.
There's also a piece of software called MeshLab, which implements many of the algorithms mentioned above and could be of some help to you.
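As a rough illustration of both steps (using Open3D here rather than PCL or GDAL, so treat it as just one possible route; the file name and thresholds are placeholders): a RANSAC plane fit can find the dominant wall plane, and quadric edge-collapse decimation merges the small triangles into a few larger ones.

    # Illustration with Open3D (pip install open3d) as a stand-in for PCL;
    # the file name and thresholds are placeholders.
    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("kinect_scan.ply")

    # RANSAC plane fit on the vertices finds the dominant plane (the wall).
    cloud = o3d.geometry.PointCloud(mesh.vertices)
    plane_model, inlier_idx = cloud.segment_plane(distance_threshold=0.01,
                                                  ransac_n=3,
                                                  num_iterations=1000)

    # Quadric edge-collapse decimation merges the hundreds of small
    # triangles into a much coarser mesh while keeping the overall shape.
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=200)
    o3d.io.write_triangle_mesh("kinect_scan_simplified.ply", simplified)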

Creating 2D graphics with a 3D tool? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I'm planning to develop an RTS game with 2D graphics. Since it will be a sprite-based game, it will require multiple views of every actor (or at least of most of them),
for instance
Now the problem is that I'm not much of a designer. I can work a bit with Photoshop, but since it's a 2D tool it would be really hard to create different views of a single character and keep them looking consistent.
That's why I was thinking of creating the models with a 3D tool; then I could get all the renders just by rotating them. Does this make sense to you guys?
If so, that drives me to a second question: what software could I use?
Again, I'm a programmer, not a designer, so I will need to learn from scratch. 3D Studio and Blender look really complex; Google SketchUp seems easier, but I'm not sure it's worth it.
Well, that's it. Thanks in advance for any feedback.
Creating your actors with a 3D modelling tool and then creating sprites by rendering multiple viewpoints of them is a sound approach. The main thing is to make sure you have a way of scripting the sprite production so that you don't have to generate hundreds of sprite images by hand!
These days though, I'd have to query why, if you have actors as 3D models, you wouldn't just render them directly in 3D on whatever platform the game is running on. Even the most humble mobile platform has enough graphics power to 3D render any model you're likely to cook up for sprite-sized objects, and a fully 3D approach gives much more flexibility.
Hybrid approaches are also possible; I seem to remember that the Total Annihilation RTS games stored 3D models of the game objects, but created and cached 2D sprites of them on demand (within the otherwise 2D game engine) rather than relying on the flaky 3D hardware of the day or on pregeneration of sprite images for loading. It was a good solution for its time, but I'd be surprised if the approach was still needed today.
It's worth persevering with a package like blender or 3D Studio. The skills you pick up will be useful for other stuff in the future.
If you're dealing with relatively small or low-res graphics like in your example, you don't need to worry about putting too much detail into your model. Just render it out, scale it down, and then adjust it in a paint package.

Low level graphics programming and ZBrush

After a while of 3D modelling and enjoying ZBrush's impeccable performance and numerous features, I thought it would be great OpenGL practice for me to create something similar, just a small sculpting tool. Sure enough I got it done, though I couldn't match ZBrush's performance, of course, seeing as a brigade of well-paid professionals outmatches a hobbyist. For the moment I just assumed ZBrush was heavily hardware accelerated; imagine my surprise when I found out it's not, and furthermore that it uses neither OpenGL nor Direct3D.
This made me want to learn graphics at a lower level, but I have no clue where to start. How are graphics libraries made, and how does one access the framebuffer without the use of OpenGL? How much of a hassle would it be to display just a single pixel without any preexisting tools, and what magic gives ZBrush such performance?
I'd appreciate any info on any question and a recommendation for a book that covers any of these topics. I'm already reading Michael Abrash's Graphics Programming Black Book but it's not really addressing these matters or I just haven't reached that point yet.
Thank you in advance.
(Please don't post answers like "just use OpenGL" or "learn math"; this seems to be the reaction everywhere I post this question, but these replies are off-topic.)
ZBrush is godly in terms of performance but I think it's because it was made by image processing experts with assembly expertise (it's also likely due to the sheer amount of assembly code that they've been almost 20 years late in porting to 64-bit). It actually started out without any kind of 3D sculpting and was just a 2.5D "pixol" painter where you could spray pixels around on a canvas with some depth and lighting to the "pixols". It didn't get sculpting until around ZB 1.5 or so. Even then it impressed people with how fast you could spray these 2.5D "pixols" around on the canvas back when a similarly-sized brush just painting flat pixels with Photoshop or Corel Painter would have brought framerates to a stutter. So they were cutting-edge in performance even before they tackled anything 3D and were doing nothing more than spraying pixels on a canvas; that tends to require some elite micro-optimization wizardry.
One of the things to note about ZB when you're sculpting 20 million polygon models with it is that it doesn't even use GPU rasterization. All the rasterization is done on the CPU. As a result it doesn't benefit from a beefy video card with lots of VRAM supporting the latest GLSL/HLSL versions; all it needs is something that can plot colored pixels to a screen. This is probably one of the reasons it uses so little memory compared to, say, MudBox, since it doesn't have to triple the memory usage with, say, VBOs (which tend to double system memory usage while also requiring the data to be stored on the GPU).
As for how you get started with this stuff, IMO a good way to get your feet wet is to write your own raytracer. I don't think ZBrush uses, say, scanline rasterization, whose cost tends to rise in proportion to the number of polygons, since they reduce the number of pixels being rendered at times like when you rotate the model. That suggests that whatever technique they're using for rasterization depends for its performance more on the number of pixels being rendered than on the number of primitives (vertices/triangles/lines/voxels). Raytracing fits those characteristics. Also, IMHO a raytracer is actually easier to write than a scanline rasterizer, since you don't have to bother with tricky cases so much and elimination of overdraw comes free of charge.
Once you have software where the cost of an operation is more in proportion to the number of pixels being rendered than to the amount of geometry, you can throw a boatload of polygons at it, as they did all the way back when they demonstrated 20 million polygon sculpting at SIGGRAPH with silky frame rates almost 17 years ago.
However, it's very difficult to get a raytracer to update interactively in response to mesh data that is not only being sculpted interactively but sometimes also having its topology changed interactively. So chances are they are using some data structure other than the standard BVH or KD-tree popular in raytracing: a data structure well suited to dynamic meshes that are not only deforming but also changing topology. Maybe they can voxelize and revoxelize (or "pixolize" and "repixolize") meshes on the fly really quickly and cast rays directly into the voxelized representation. That would start to make sense given how their technology originally revolved around these 2.5D "pixols" with depth.
Anyway, I'd suggest raytracing for a start, even if it's only just getting your feet wet and getting you nowhere close to ZB's performance just yet (it's still a very good start on how to translate 3D geometry and lighting into an attractive 2D image). You can find minimal examples of raytracers on the web written in just a hundred lines of code. Most of the work in building a raytracer is typically performance and handling a rich diversity of shaders/materials. You don't necessarily need to bother with the latter, and ZBrush doesn't so much either (they use these dirt-cheap matcaps for modeling). Then you'll likely have to innovate some kind of data structure that's well suited to mesh changes to start getting on par with ZB, and micro-tune the hell out of it. That software is really on a whole different playing field.
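For a flavor of how small the core can be, here is a toy sketch of the kind of raytracer described above: one ray per pixel, one hard-coded sphere, one light, Lambertian shading only. Nothing here is ZBrush's method; everything is illustrative.

    # Tiny sphere raytracer sketch: one ray per pixel, one sphere, one light,
    # Lambertian shading only. Everything is hard-coded for illustration.
    import math

    W, H = 320, 240
    center, radius = (0.0, 0.0, 3.0), 1.0
    light = (-0.577, 0.577, -0.577)            # unit vector towards the light

    def shade(px, py):
        # Camera at the origin looking down +z; build a ray through the pixel.
        dx, dy, dz = (px - W / 2) / H, -(py - H / 2) / H, 1.0
        inv = 1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
        dx, dy, dz = dx * inv, dy * inv, dz * inv

        # Ray/sphere intersection (quadratic in t; ray origin is the origin).
        ox, oy, oz = -center[0], -center[1], -center[2]
        b = 2.0 * (ox * dx + oy * dy + oz * dz)
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return 0                           # background
        t = (-b - math.sqrt(disc)) / 2.0
        hit = (t * dx, t * dy, t * dz)
        n = tuple((hit[i] - center[i]) / radius for i in range(3))
        lum = max(0.0, n[0] * light[0] + n[1] * light[1] + n[2] * light[2])
        return int(255 * lum)

    rows = [bytes(shade(x, y) for x in range(W)) for y in range(H)]
    with open("sphere.pgm", "wb") as f:        # grayscale PGM, widely viewable
        f.write(b"P5 %d %d 255\n" % (W, H))
        f.writelines(rows)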
I have likewise been so inspired by ZB but haven't followed in their footsteps directly, instead using the GPU rasterizer and OpenGL. One of the reasons I find it difficult to explore doing all this stuff on the CPU as ZB has is that you lose the benefits of so much industrial research and so many revolutionary realtime lighting techniques that game engines, NVIDIA, and AMD have come up with, all of which benefit from GPU-side processing. There's 99% of the 3D industry, and then there's ZBrush in its own little corner doing things that no one else is doing, and you need a lot of spare time and maybe a lot of balls to abandon the rest of the industry and try to follow in ZB's footsteps. Still, I always wish I could find some spare time to explore a pure CPU rasterizing engine like ZB, since they still remain unmatched when your goal is to directly interact with ridiculously high-resolution meshes.
The closest I've gotten to ZB performance was sculpting 2 million polygon meshes at over 30 FPS back in the late 90s on an Athlon T-Bird 1.2 GHz with 256MB of RAM, and that was after 6 weeks of intense programming and revisiting the drawing board over and over in a very simplistic demo, and that was a very rare time where my company gave me so much R&D time to explore what ZB was doing. Still, ZB was handling 5 times that geometry at the same frame rates even at that time and on the same hardware, and using half the memory. I couldn't even get close, though I did end up with a newfound respect and admiration for the programmers at Pixologic. I also had to insist to my company that we do the research. Some of the people there thought ZBrush would never become anything noteworthy and would just remain a cutesy artistic application. I thought the opposite, since I saw something revolutionary long before it acquired such an epic following.
A lot of people at the time thought ZB's ability to handle so many polygons was impractical and that you could just paint bump/normal/displacement maps and add whatever details you needed into textures. But that's ignoring the workflow side of things. When you can just work straight with epic amounts of geometry, you get to uniformly apply the same tools and workflow to select vertices, polygons, edges, brush over things, etc. It becomes the most straightforward way to create such a detailed and complex model, after which you can bake out the details into bump/normal/displacement maps for use in other engines that would vomit on 20 million polygons. Nowadays I don't think anyone still questions the practicality of ZB.
[...] but it's not really addressing these matters or I just haven't reached that point yet.
As a caveat, no one has published anything on how to achieve performance rivaling ZB's. Otherwise there would be a number of applications rivaling its performance and features when it comes to sculpting, DynaMesh, ZSpheres, etc., and it wouldn't be so amazingly special. You definitely need your share of R&D to come up with anything close to it, but I think raytracing is a good start. After that you'll likely need to come up with some really interesting ideas for algorithms and data structures, in addition to a lot of micro-tuning.
What I can say with a fair degree of confidence is that:
They have some central data structure to accelerate rasterization that can update extremely quickly in response to changes the user makes to a mesh (including topological ones).
The cost of rasterization is more in proportion to the number of pixels rendered rather than the size of the 3D input.
There's some micro-optimization wizardry in there, including straight up assembly coding (I'm quite certain ZB uses assembly coding since they were originally requiring programmers to have both assembly and C++ knowledge back when they were hiring in the 2000s; I really wanted to work at Pixologic but lacked the prerequisite assembly skills).
Whatever they use is pretty light on memory requirements given that the models are so dynamic. Last time I checked, they use less than 100MB per million polygons even when loading in production models with texture maps. Competing 3D software with the exception of XSI can take over a gigabyte for the same data. XSI uses even less memory than ZB with its gigapoly core but is ill-suited to manipulating such data, slowing down to a crawl (they probably optimized it in a way that's only well-suited for static data like offloading data to disk or even using some expensive forms of compression).
If you're really interested in exploring this, I'd be interested to see what you can come up with. Maybe we can exchange notes. I've devoted much of my career just being interested in figuring out what ZB is doing, or at least coming up with something of my own that can rival what it's doing. For just about everything else I've tackled over the years from raytracing to particle simulations to fluid dynamics and video processing and so forth, I've been able to at least come up with demos that rival or surpass the performance of the competition, but not ZBrush. ZBrush remains that elusive thorn in my side where I just can't figure out how they manage to be so damned efficient at what they do.
If you really want to crawl before you even begin to walk (I think raytracing is a decent enough start, but if you want to start out even more fundamental), then maybe a natural evolution is to first just focus on image processing: filtering images, painting them with brushes, etc., along with some support for basic vector graphics, like a miniature Photoshop/Illustrator. Then work your way up to rasterizing some basic 3D primitives, like maybe just a wireframe of a model rendered using Wu line rasterization and some basic projection functions. Then work your way towards rasterizing filled triangles without any lighting or texturing, at which point I think you'll get closer to ZBrush by focusing on raytracing rather than scanline with a depth buffer; however, doing a little bit of the latter might be a useful exercise anyway. Then work on rendering lit triangles, maybe starting with direct lighting and just a single light source, just computing a luminance based on the angle of the normal relative to the light source. Then work towards textured triangles, using barycentric coordinates to figure out which texels to render. Then work towards indirect lighting and multiple light sources. That should be plenty of homework to develop a fairly comprehensive idea of the fundamentals of rasterization.
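As a companion to the "rasterize filled triangles" step, here is a bare sketch of barycentric coverage testing for a single 2D triangle (no lighting or texturing; the coordinates are made up):

    # Bare sketch of barycentric triangle rasterization: walk the bounding
    # box and keep the pixels whose barycentric coordinates all share a sign.
    # The triangle's 2D coordinates are made up for illustration.
    def edge(ax, ay, bx, by, px, py):
        # Signed area of (a, b, p); the sign says which side of edge a->b p is on.
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def rasterize(v0, v1, v2, width, height):
        area = edge(*v0, *v1, *v2)
        if area == 0:
            return
        for y in range(height):
            for x in range(width):
                w0 = edge(*v1, *v2, x, y)
                w1 = edge(*v2, *v0, x, y)
                w2 = edge(*v0, *v1, x, y)
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    # (w0/area, w1/area, w2/area) are the barycentric weights;
                    # they would interpolate depth, color, or texel coordinates.
                    yield x, y, (w0 / area, w1 / area, w2 / area)

    covered = list(rasterize((10, 5), (60, 20), (25, 55), 80, 60))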
Now once you get to raytracing, I'm actually going to recommend one of the least efficient data structures for the job typically: octrees, not BVHs or KD-trees, mainly because I believe octrees are probably closer to allowing what ZB allows. Your bottleneck in this context isn't rendering the most beautiful images with complex diffuse materials, indirect lighting, and subpixel samples for antialiasing. It's handling a boatload of geometry with simple lighting, simple shaders, and one sample per pixel, where the geometry is changing on the fly, including topologically. Octrees seem a little better suited as a starting point in that case than KD-trees or BVHs.
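To make the octree suggestion concrete, a toy point octree might look like the sketch below; it makes no attempt at ZBrush-level performance, and the capacity constant is arbitrary:

    # Toy point octree sketch: each node covers an axis-aligned cube and
    # splits into 8 children once it holds too many points. Purely illustrative.
    class Octree:
        MAX_POINTS = 8

        def __init__(self, center, half):
            self.center, self.half = center, half
            self.points = []
            self.children = None            # 8 sub-cubes once subdivided

        def _child_index(self, p):
            cx, cy, cz = self.center
            return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

        def insert(self, p):
            if self.children is None:
                self.points.append(p)
                if len(self.points) > self.MAX_POINTS and self.half > 1e-6:
                    self._subdivide()
                return
            self.children[self._child_index(p)].insert(p)

        def _subdivide(self):
            h = self.half / 2.0
            cx, cy, cz = self.center
            self.children = [
                Octree((cx + (h if i & 1 else -h),
                        cy + (h if i & 2 else -h),
                        cz + (h if i & 4 else -h)), h)
                for i in range(8)
            ]
            for q in self.points:
                self.children[self._child_index(q)].insert(q)
            self.points = []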
One of the problems with ignoring the fundamentals these days is that a lot of young developers have lost that connection from, say, triangle to pixel on the screen. So if you don't want to take such rasterization and projection for granted, then your initial goal is to project 3D data into a 2D coordinate space and rasterize it.
If you want a book that starts at a low level, with framebuffers and such, try Computer Graphics: Principles and Practice, by Foley, van Dam, et al. It is an older, traditional text, but newer books tend to have a higher-level view. For a more modern text, I can also recommend 3D Computer Graphics by Alan Watt. There are plenty of other good introductory texts available -- these are just two that I am personally familiar with.
Neither of the above books is tied to OpenGL; if I recall correctly, they include the specific math and algorithms necessary to understand and implement 3D graphics from the bottom up.

How do I create a real-time rendering window from scratch?

I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is to create a simple game without using DirectX or OpenGL. I understand most of the math I believe, but the problem I am running up against is I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffers and screen tearing and probably terrible efficiency problems, but I want to create my own program so that I can see, from the very lowest level of a high-level language, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like to have a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with values for pixels and display it as a bitmap. This way you come closest to poking RGB values into video memory. WPF, Silverlight, and HTML5/JavaScript can do this. If you do not make it full screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas element.
Then it is up to you to implement the logic to draw lines, circles, Bézier curves, and 3D projections.
This is a lot of fun and you will learn a lot.
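The same "fill an array of pixels, then blit it as a bitmap" idea can be sketched in Python, with numpy and pygame standing in for WriteableBitmap or the HTML5 canvas (the window size and the test pattern are arbitrary):

    # Fill a (width, height, 3) array with RGB values and blit it to the
    # window every frame. numpy and pygame stand in for WriteableBitmap /
    # the HTML5 canvas here; the test pattern is arbitrary.
    import numpy as np
    import pygame

    W, H = 320, 240
    pygame.init()
    screen = pygame.display.set_mode((W, H))
    frame = np.zeros((W, H, 3), dtype=np.uint8)   # your "framebuffer"

    t = 0
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        # Poke whatever values you like into the array; here, a moving gradient.
        frame[:, :, 0] = (np.arange(W)[:, None] + t) % 256
        frame[:, :, 1] = np.arange(H)[None, :] % 256
        pygame.surfarray.blit_array(screen, frame)
        pygame.display.flip()
        t += 2

    pygame.quit()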
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get a good bang for your buck by looking at a library like SDL, which provides you with a frame buffer that you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while, and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for - see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates into 2D points.
Once you have the 2D points you've been looking for, you can, for instance on Windows, select a brush and draw your triangles while coloring them at the same time.
I do not know why you would need bitmaps, but if you want to practice, say, texturing, you can also do that yourself, although of course on a weak computer this might cost you a significant number of frames per second.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
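A minimal sketch of that transform step, a perspective projection from a 3D point to pixel coordinates, is below; the focal length and screen size are arbitrary:

    # Minimal 3D -> 2D perspective projection sketch; the focal length and
    # screen size are arbitrary. This is the step a pipeline/matrix class wraps.
    def project(point, focal=1.0, width=640, height=480):
        x, y, z = point
        if z <= 0:
            return None                   # behind the camera, clipped
        # Perspective divide: similar triangles give x' = f*x/z, y' = f*y/z.
        sx = focal * x / z
        sy = focal * y / z
        # Map the normalized plane coordinates to pixel coordinates.
        px = int((sx + 1.0) * 0.5 * width)
        py = int((1.0 - (sy + 1.0) * 0.5) * height)
        return px, py

    triangle_3d = [(-1.0, 0.0, 4.0), (1.0, 0.0, 4.0), (0.0, 1.5, 6.0)]
    triangle_2d = [project(v) for v in triangle_3d]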
Jt Schwinschwiga

Graphics/Vision Interesting Topics

I would like to do an interesting project for a computer graphics course. I know that there is a lot of literature out there (i.e. SIGGRAPH conference papers). I have a very large range of interest with regard to computer graphics (i.e. image processing, 3D modeling, rendering, animation). However, I've only taken computer vision/graphics for 2 semesters and thus don't have too much background experience, except for the class projects that I had to do.
I've been looking through SIGGRAPH papers trying to see if there is anything that will be of interest to me but the literature is extremely vast. I was wondering if anyone has any topic suggestions, anything interesting that you ran across that you could recommend. I would prefer to do something fun yet slightly challenging (not really interested in making a shooter game).
If this question does not belong here, I apologize and please let me know where I should move it.
Thanks!
Image Drawing Animator. (The name is kind of misleading, but I didn't care much about it.)
Anyway, the software does the following:
Take an image, say a JPEG or BMP, as input.
Extract the lines from the image. (I used Matlab and Laplace transforms.)
Convert the static lines to vector paths.
Simulate drawing the image using the extracted paths.
In summary, you give it an image, for example a cityscape, and the program extracts all the lines and starts drawing the buildings, streets, and sunset lines, then finally adds the colors one by one until the full image is done.
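A rough equivalent of the extraction step outside Matlab could use OpenCV, with Canny edge detection plus contour tracing standing in for the Laplace-based step (the file name and thresholds are placeholders):

    # Rough OpenCV sketch of the extraction step (pip install opencv-python):
    # Canny edge detection plus contour tracing stands in for the Matlab /
    # Laplace step. The file name and thresholds are placeholders.
    import cv2

    image = cv2.imread("cityscape.jpg", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(image, 100, 200)

    # Each contour is an ordered list of points: effectively a vector path
    # that the "drawing" animation can replay stroke by stroke.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    paths = [c.reshape(-1, 2) for c in contours]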
A real-time hand detector.
You'll have plenty of interesting and fun applications with this.
