Request for approach recommendation for object identification - visual-c++

Equipment
Windows 7, OpenCV 2.3.1, Visual Studio C++ 2010 Express, and, eventually, any digital video cameras needed, lenses (?)
Project
I want to build a machine to identify characteristics of the flight of a baseball my son hits to the outfield (length, direction, height, etc.) in real time.
Solution description
I will have two fixed digital video cameras observe the flight of the ball and will analyze those video streams with OpenCV to locate and track the ball.
OpenCV methods
There are three methods I've read about and/or seen to identify a ball:
1. circle detection from edges
2. circle detection of blobs in a color range (orange-ball and tennis-ball examples)
3. moving-circle blob detection by frame differencing (car and people identification and tracking examples)
I have done the first (cvtColor, GaussianBlur, Canny, HoughCircles), but only well enough that it works against certain colored backgrounds. I started on the second, but before I spend days making it work I realized I don't know which approach is best. To someone with no image-analysis experience (me), it seems that my PC could have difficulty with (1), finding the right edges, since the weather and background will change from game to game. (2) could be difficult because there may be several blobs in the foreground (players' white uniforms, bases) and the background (white lettering or backgrounds on signs) that are also baseball white, and because the ball's white would change as the sun went down or the ball got dirty. I think (3) is the best way to go, but I don't want to spend a lot of time making it work (my early attempts failed) only to learn its shortcomings for tracking a baseball after I had it functioning.
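For reference, a rough sketch of approach (3) - difference consecutive frames, threshold, then keep small moving blobs - is below. This is only an illustration, not a tested tracker: the camera index, blur kernel, thresholds, and size filter are placeholder values that would need tuning for a baseball.

    // Hedged sketch of approach (3): frame differencing plus blob filtering.
    // Thresholds and the camera index are illustrative placeholders.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0);                 // camera index is a placeholder
        cv::Mat frame, gray, prevGray, diff, mask;

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, CV_BGR2GRAY);
            cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);

            if (!prevGray.empty()) {
                cv::absdiff(gray, prevGray, diff);             // motion between frames
                cv::threshold(diff, mask, 25, 255, CV_THRESH_BINARY);

                std::vector<std::vector<cv::Point> > contours;
                cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

                for (size_t i = 0; i < contours.size(); ++i) {
                    double area = cv::contourArea(contours[i]);
                    if (area < 5 || area > 200) continue;      // crude size filter for a distant ball
                    cv::Point2f center; float radius;
                    cv::minEnclosingCircle(contours[i], center, radius);
                    cv::circle(frame, cv::Point((int)center.x, (int)center.y),
                               (int)radius + 2, cv::Scalar(0, 0, 255), 2);
                }
            }
            prevGray = gray.clone();
            cv::imshow("tracking", frame);
            if (cv::waitKey(1) == 27) break;                   // Esc to quit
        }
        return 0;
    }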
The question
Which of 1-3, or a 4, 5, or 6 I haven't listed (I'm sure there are other methods you know of that I don't), is the most appropriate approach in OpenCV for learning the characteristics of the 3D flight (distance, height, direction, etc.) of a baseball hit to the outfield?
(I'm expecting to need to write the code myself but I wouldn't turn down portions of the program that are sent to me.)
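For the 3D part of the question: whichever detection method is used, the flight characteristics would come from triangulating the ball's position out of the two synchronized camera views. A minimal sketch of linear (DLT) triangulation is below; it assumes the 3x4 CV_64F projection matrices P1 and P2 come from calibrating the two fixed cameras, and the function name and setup are illustrative, not an OpenCV-provided routine.

    // Hedged sketch: linear (DLT) triangulation of the ball's 3D position from
    // two calibrated, synchronized cameras. P1/P2 are 3x4 CV_64F projection
    // matrices from calibration; p1/p2 are the detected ball centres.
    #include <opencv2/opencv.hpp>

    cv::Point3d triangulateBall(const cv::Mat& P1, const cv::Mat& P2,
                                const cv::Point2d& p1, const cv::Point2d& p2) {
        cv::Mat A(4, 4, CV_64F);
        for (int j = 0; j < 4; ++j) {
            A.at<double>(0, j) = p1.x * P1.at<double>(2, j) - P1.at<double>(0, j);
            A.at<double>(1, j) = p1.y * P1.at<double>(2, j) - P1.at<double>(1, j);
            A.at<double>(2, j) = p2.x * P2.at<double>(2, j) - P2.at<double>(0, j);
            A.at<double>(3, j) = p2.y * P2.at<double>(2, j) - P2.at<double>(1, j);
        }

        cv::SVD svd(A);
        cv::Mat X = svd.vt.row(3);            // null-space direction = homogeneous 3D point
        double w = X.at<double>(0, 3);
        return cv::Point3d(X.at<double>(0, 0) / w,
                           X.at<double>(0, 1) / w,
                           X.at<double>(0, 2) / w);
    }

Tracking the triangulated points over time would then give distance, height, and direction of the flight.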
Thanks for any advice.

Related

Andengine: Box2D extension vs Sprites for games

My question revolves around which technique is best when implementing a game with AndEngine. In this game a user is able to do simple touch movements: drag, pinch, and tap on certain objects, which are represented by just sprites at the moment. At this time, the game doesn't really use any physics. The only things being done with the sprites are pixel-perfect collision detection (between sprites) and adding and removing these sprites at run time. Just using sprites gets the job done, but I was wondering if using the Box2D extension would be a better fit?
What are the positive and negative points of using the Box2D extension vs. just sprites? And does one outweigh the other?
The Box2D extension is obviously used for physics simulation in the game. Sprites are used for showing images on screen. So they are different concepts. But if you want more reliable collision detection you may need to use Box2D for that, even if you do not use physics in your game. Sprite collision detection is not pixel perfect in AndEngine.
So, regarding the use of Box2D:
+ more reliable collision detection
+ you can use bodies as solid bodies and have Box2D bounce them automatically, or as sensors, which do not bounce but report a collision (or overlap) with another body/sensor - see the sketch below
- you need to write more code and implement bodies for all sprites involved in collisions
- coding collision detection for bodies (if you want to use them as sensors rather than bounce them) is a bit harder, but not too hard
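To make the sensor point concrete, here is a minimal sketch using Box2D's native C++ API (the AndEngine extension wraps the same concepts in Java). The world setup and values are illustrative only, and the one-argument b2World constructor assumes Box2D 2.2 or later; older versions also take a doSleep flag.

    // Hedged sketch: a Box2D sensor fixture. A sensor never bounces, but
    // overlaps are reported to a b2ContactListener registered on the world.
    #include <Box2D/Box2D.h>
    #include <cstdio>

    class OverlapListener : public b2ContactListener {
    public:
        void BeginContact(b2Contact* contact) {
            std::printf("two fixtures started overlapping\n");
        }
    };

    int main() {
        b2World world(b2Vec2(0.0f, -9.8f));      // standard world with downward gravity

        OverlapListener listener;
        world.SetContactListener(&listener);

        b2BodyDef bodyDef;
        bodyDef.type = b2_dynamicBody;
        bodyDef.position.Set(0.0f, 2.0f);
        b2Body* body = world.CreateBody(&bodyDef);

        b2CircleShape circle;
        circle.m_radius = 0.5f;

        b2FixtureDef fixtureDef;
        fixtureDef.shape = &circle;
        fixtureDef.isSensor = true;              // report overlaps, do not collide physically
        body->CreateFixture(&fixtureDef);

        // Step the simulation as the game loop runs.
        for (int i = 0; i < 60; ++i)
            world.Step(1.0f / 60.0f, 8, 3);
        return 0;
    }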

Tracking the top of heads with Kinect

I was wondering if there is an existing API for tracking the top of people's heads with the Kinect, e.g., when the Kinect is facing downwards from a ceiling.
If not, how might I implement such a thing with its depth data?
No. The Kinect expects to be facing a standing (or seated, given the appropriate flag) human. All APIs (official or 3rd party) that have a notion of skeleton tracking expect this.
If you wish to track someone from above, you will need to use a library such as OpenCV (or EmguCV, for C# development). Well, you don't have to, but they offer utilities to help with computer vision and image processing. These libraries don't care if you are using a Kinect or just a regular RGB camera.
Using the Kinect from above, you could use the depth data to help locate and track blobs. With the Kinect at a known distance from the floor, have a few people walk under it and see what z-coordinates you get out of it -- you can then assume that anything within a certain z-coordinate range is a person walking across the screen (vs. a cat, or something else).
You will need to use standard image processing techniques (see OpenCV reference above) to initially find the blobs within the image. Once found, the depth data from the Kinect might be useful but I think you'll find it isn't ultimately necessary if you're just watching people walk across the floor.
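As a hedged illustration of that blob-finding step in OpenCV's C++ API (the depth band and minimum area are placeholder values, and the depth image is assumed to be a 16-bit millimetre map such as OpenNI can provide):

    // Hedged sketch: threshold a Kinect depth map to a head-height band and
    // return blob centres. Band limits and minimum area are placeholders.
    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point2f> findHeads(const cv::Mat& depthMm, int minMm, int maxMm) {
        cv::Mat mask;
        cv::inRange(depthMm, cv::Scalar(minMm), cv::Scalar(maxMm), mask); // keep head-height pixels

        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

        std::vector<cv::Point2f> centres;
        for (size_t i = 0; i < contours.size(); ++i) {
            if (cv::contourArea(contours[i]) < 500)        // ignore small blobs (noise, cats)
                continue;
            cv::Moments m = cv::moments(contours[i]);
            centres.push_back(cv::Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00)));
        }
        return centres;
    }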
We built a Kinect-driven experience where the sensors had to point downward to detect users walking along a wall. We used openTSPS to do all the work of taking the camera input and doing blob detection and handing off tracked "persons" to (in our case) a Processing app. It works really well for us.
http://opentsps.com/

resources for making a 2d sprite? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am making a 2D game. Can you post links to tutorials for making 2D game sprites, and tutorials for browser game development?
That would be really helpful.
Thanks to all
Here's an article with quite a few details
This site also has some sprite-related resources, and the forums have some guides from a number of experienced people.
If you are wanting to learn about making 2D sprites, the best advice I can give is to learn from the hard work of others. Find a game with sprites that you can edit, and start by modifying the existing sprites (a simple recolor is an easy starting point). Then you can move on to larger sprite modifications (shape, size, etc), "swapping" sprites between games, creating a simple game and using sprites that you "borrowed" from an existing game, etc.
I've been thinking about this problem recently.
In the old days, sprites were hand-drawn pixel by pixel. This works well for flat 2D games (side-scrollers, cartoon adventure games, top-down games, and such), particularly if they are in 320x200 resolution. Some examples of gorgeous hand-drawn sprite games are the Sierra and LucasArts adventure games, Disney's jump'n'runs, Capcom's fighting games, the Tyrian/Raptor-style top-down scrollers, and the early RTS games (C&C, WC1).
Some games, like Prince of Persia and Mortal Kombat, used sprites traced or digitized from filmed actors. That produced fluid motion, but looked 'flat'.
Between the mid-90s and the early-00s, character/item sprite-drawing was done by taking stills of 3D objects. Practically every 2D RTS game since around Age of Empires 1 did that. AFAIK Diablo, Baldur's Gate, Divine Divinity, and other such RPG games did the same. This is the reason those games came on so many CDs - they were chock-full of content.
This approach looks great (not flat, but "2.5D") but takes a lot of hard-drive space. Also, whereas you could produce hand-drawn sprites in Paint, the 2.5D ones require 3ds Max (or equivalent).
One problem that arises with this approach is the combinatorial explosion in costume design (i.e. if you want to animate a character in three different coats with three different hats and three different pairs of pants, you need 27 distinct animations). The solution to this, as seen in Diablo II and Baldur's Gate, is rag-dolling - you produce different sprites for every part of the body. This takes a lot of work. Blizzard made their own tools to produce their sprites, but I'm not sure there are sprite rag-dolling tools in the open.
More recently, most games are 3D. Many actually look worse than the old 2.5D ones, because a simple 3D model can animate well in sprites, but poorly in real-time 3D. The difference is that between a glamor shot of a celebrity, taken from a certain distance in certain lighting and then worked-over in photoshop, and the appearance of the same celebrity in real-life (which may not be as glamorous).
I wonder if there are 3D Object -> Sprite programs. I know of one (don't remember the name at the moment), but are there others? At the very least I'm sure there are scripts for Maya and 3ds Max that take shots of an animated 3D object from different angles. Does anyone know more on this?
To make a 2D game sprite:
Open up Paint. Paint a picture. Save it as a BMP. You now have a one-frame sprite. You can add metadata to it in code if needed for hotspots, collision info, etc. If you want it to animate, create a bunch of BMPs and display them one at a time at whatever speed you want the animation to run.
No need for a tutorial link for something like this. Or, you can download any one of thousands of sprite editors that do the above stuff in one place.
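The "display them one at a time" part amounts to something like the following sketch. The Image type is a stand-in for whatever bitmap type the rendering framework actually uses; only the frame-cycling logic is the point here.

    // Hedged sketch: cycle through pre-loaded sprite frames at a fixed rate.
    #include <vector>
    #include <cstddef>

    struct Image { /* placeholder for a loaded BMP */ };

    struct Sprite {
        std::vector<Image> frames;   // one Image per BMP
        double secondsPerFrame;
        double elapsed;
        std::size_t current;

        Sprite(const std::vector<Image>& f, double spf)
            : frames(f), secondsPerFrame(spf), elapsed(0.0), current(0) {}

        // Call once per game-loop tick with the elapsed time in seconds.
        void update(double dt) {
            if (frames.empty()) return;
            elapsed += dt;
            while (elapsed >= secondsPerFrame) {
                elapsed -= secondsPerFrame;
                current = (current + 1) % frames.size();
            }
        }

        const Image& frameToDraw() const { return frames[current]; }
    };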

Do all developers consider monitor quality (colors, not resolution) to be irrelevant?

I especially hear it from those advocates of "business" notebooks manufactured by IBM/Lenovo, HP, Dell (maybe) that "business users do not need quality screens". They stick in the worst possible LCDs out there (even if with a high resolution) and dare to sell that crap. You can't even distinguish hue variations like light-yellow vs. light-grey.
I really don't get it - do all of you agree that the color reproduction of a developer's display is irrelevant, and that even a grayscale display would do?
I understand most developers work with text, but at times there is design work to be done that is not doable on cheap LCDs.
And besides - wouldn't you enjoy fresh saturated colors even in a development environment? Bright cheerful icons on menus? Isn't it better to sit in a sunny office with green trees and flowers out of the window than in a garage with dark colors and weak artificial lighting?
P.S. Inspired by the topic about keyboards: Keyboard for programmers
The question about displays and developers has interested me for a very long time.
Even though I don't need a high quality screen, I appreciate the difference, and like esnoeijs said, an occasion will arise where I'll need to critique some graphic design work where the quality monitor will make a difference.
I think, "developer" is too broad to give a precise answer.
If you are a code-crafter of programs reading text emitting text, without the need to make some colors look nice, then yes, then you really can go with a monochrome screen. you need black as a background, white as the foreground and some reversing to highlight matching braces. In this particular case, I would value high resolutions far far more important than colors, since usually it is about seeing more code (and especially, more things about and around the current piece of code, like documentation, tests, a quick interpreter loop, some research paper, you name it).
If you are a developer just learning a language and if you have an editor with syntax highlighting, then color is a massive, massive usability leap. I would not want to miss the ability to display keywords in a bright pink, strings in a brigt cyan and similar things (all on a black background)
If you are a frontend-designer, then it is a completely different story. If you are a frontend designer, you will need a high quality display with good color display abilities. You do not need the best one possible, but your display should at least be able to display the colors your regular user will use, so you will not put in green, because you wanted blue and your users see yellow (or other nonsenses).
if you use tools that require the use of colors in order to encode information, color is crucial, because you might be unable to see the additional information.
...
So, I think most programmers do not need ridiculous color-display abilities, even though, most of the time, a good solid color display is helpful, because they need to work on some frontend or because they want to learn some language.
HTH,
Tetha
Better quality color monitors can come in handy in a lot of ways. The first way that comes to mind is if you are using a code development tool that has the capability of highlighting keywords such as Zend does.
I once spent half a day trying to add zebra striping to a table in my company's webapp that already had it because both my screen and QA's screen were unable to display the different colors of the zebra stripes (they rendered as the same color). Likewise, I once had my boss ask me to change the color of part of an icon, and to me it made the icon look like a uniform blue, but on his much better monitor, you could clearly see both shades of blue and it looked really nice... it was hard to make that edit without being able to see what I was doing.
I guess the developers in my company end up doing some design work in addition to real dev. I do spend most of my time in the shell though, so aside from the constant flickering that gives me headaches (yes, it's an LCD), a low-qual monitor is OK.
I'm a developer, but being in webdev land I've picked up enough design sense to be critical about it, so I mostly try to get Samsung screens with a good colour range.
With a good monitor, you can adjust it to your liking.
Personally, I have a $700 Fujitsu Siemens monitor bought in (AFAIK) 2000, and a $340 BenQ bought in 2005, and I prefer coding on the first monitor, as I don't have to crank up the brightness (reducing headaches) and can still see everything I want to see (subpixel-hinted 6-point fonts, subtle variations in syntax highlighting, etc.).
At least one author would disagree. He ranked color accuracy on four notebooks:
Lenovo ThinkPad W700
IBM/Lenovo ThinkPad T60
Dell Inspiron Mini 9
Apple late-2008 MacBook Pro 15 inch
I'm less picky about the actual monitor I have and more picky that I have two monitors that are exactly the same model and use the same video connector.
As a web developer, it can be frustrating to have colors that don't match because one of your monitors is VGA and the other is DVI.
Possibly the sort of "business user" who works on invoices all day does not need a very good display, but anyone who works on anything whose appearance counts, from software developers to business users who need to make Powerpoint presentations, does.
If you are a hardcore terminal+vim user like me, then color quality and fidelity are almost irrelevant, except for the quality of blue (which I use in some situations, like directory names), which tends to be too faint to be seen on my black background. Nothing that cannot be fixed with some tinkering, though, but I am used to blue.
That said, I actually have a couple of things to say about the new screen on the MacBook unibody. The glossy finish is a real pain. So annoying. And the color fidelity is very low. I spent an evening trying to understand why, on a gradient from light green to white, I had a pinkish stripe. It turns out that the pink is an artifact of the MacBook screen. Another screen does not show the issue. On the plus side, the LED backlight is very powerful and nice, making the colors very vibrant.
This is to say that color fidelity is fundamental if you use color-intensive stuff like Eclipse (which also communicates a lot through different shades of color), and of course for web frontend development. If you just need a terminal and vim running, I don't think color fidelity makes a real difference, once you have a comfortable setup with low reflections and good contrast.
(note: it's been a few years since I've shopped for a monitor. this may be out of date)
I find it interesting that nobody has really defined "quality" yet, other than to say more vibrant colors. Generally, LCD panels fall into one of two tracks:
Good color/image reproduction (S-IPS panels and similar)
Good response time (TN panels)
I consider S-IPS and similar panels a must for development for one crucial reason: viewing angle. The image doesn't change colors or do other weird things as your angle to the screen changes. Very important for collaboration.
At the high end of this scale are monitors that are designed to perform well with color calibration. Most developers won't need anything that fancy.
TN panels are decent for gaming, movies, and other things featuring fast motion. They are optimized for pixel response time, and it's usually the main feature touted for these panels. Many cheaper panels are going to be of this variety.
In a monitor, I look for four things:
panel type (S-IPS or similar)
brightness (no more than 300cd/m2)
dot pitch (for good text, go with a small dot pitch: 0.27 is too big)
good contrast/ light leakage/ etc. (how black is black, and how uniform)
Although I love S-IPS panels, I must admit that any LCD monitor that can meet criteria 2-4 above would be a good choice, even if it's a cheaper TN panel.
It depends what you're doing.
If you're processing images, yes, a good "quality" monitor is important, but it's equally (or more) important to have it set up correctly and calibrated.
If you're doing web-design, having a decent monitor is important, but again only if it's setup correctly (contrast/brightness/colour-balance).
If you're just "writing code", having a monitor your eyes like is important, the colour replication isn't important. A monochrome monitor might be stretching it, syntax-highlighting is nice, but even vim and it's 16 colours is "enough"
The term quality is also a bit "it depends" also.. CRT's have far better colour replication than TFT's, but I wouldn't recommend them (I always found reading text on them difficult, and they are hard to find, bulky and generally deprecated now).
For web-design, pretty much any monitor will be fine as long as it's not a 10 year old CRT with a broken red cathode-tube.. Again, as long as it's set-up correctly, most monitors are capable of displaying colour "good enough"
For "writing code", I think size/resolution/number-of-screens is more important than colour replication, as shown by most answers to any of these questions

3D Audio Engine

Despite all the advances in 3D graphic engines, it strikes me as odd that the same level of attention hasn't been given to audio. Modern games do real-time rendering of 3D scenes, yet we still get more-or-less pre-canned audio accompanying those scenes.
Imagine - if you will - a 3D engine that models not just the physical appearance of items, but also their audio properties. And from these models it can dynamically generate audio based on the materials that come into contact, their velocity, distance from your virtual ears, etcetera. Now, when you're crouching behind the sandbags with bullets flying over your head, each one will yield a unique and realistic sound.
The obvious application of such a technology would be gaming, but I'm sure there are many other possibilities.
Is such a technology being actively developed? Does anyone know of any projects that attempt to achieve this?
Thanks,
Kent
I once did some research toward improving OpenAL, and the problem with simulating 3D audio is that so many of the cues that your mind uses — the slightly different attenuation at various angles, the frequency difference between sounds in front of you and those behind you — are quite specific to your own head and are not quite the same for anyone else!
If you want, say, a pair of headphones to really make it sound like a creature is in the leaves ahead and in front of the character in a game, then you actually have to take that player into a studio, measure how their own particular ears and head change the amplitude and phase of the sound at different distances (amplitude and phase are different, and are both quite important to the way your brain processes sound direction), and then teach the game to attenuate and phase-shift the sounds for that particular player.
There do exist "standard heads" that have been mocked up with plastic and used to get generic frequency-response curves for the various directions around the head, but an average or standard will never sound quite right to most players.
Thus the current technology is basically to sell the player five cheap speakers, have them place them around their desk, and then the sounds — while not particularly well reproduced — actually do sound like they're coming from behind or beside the player because, well, they are coming from the speaker behind the player. :-)
But some games do bother to be careful to compute how sound would be muffled and attenuated through walls and doors (which can get difficult to simulate, because the ear receives the same sound at a few milliseconds different delay through various materials and reflective surfaces in the environment, all of which would have to be included if things were to sound realistic). They tend to keep their libraries under wraps, however, so public reference implementations like OpenAL tend to be pretty primitive.
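For reference, the "pretty primitive" model stock OpenAL gives you is just a position per source plus a simple distance-attenuation model, roughly like the sketch below. The values are illustrative; there is no HRTF, occlusion, or geometry involved.

    // Hedged sketch of stock OpenAL 3D positioning: per-source position and
    // distance attenuation only.
    #include <AL/al.h>

    void positionSource(ALuint source, float x, float y, float z) {
        alSource3f(source, AL_POSITION, x, y, z);
        alSourcef(source, AL_REFERENCE_DISTANCE, 1.0f);  // distance at which gain is 1.0
        alSourcef(source, AL_ROLLOFF_FACTOR, 1.0f);      // how quickly gain falls off
    }

    void positionListener(float x, float y, float z) {
        alListener3f(AL_POSITION, x, y, z);
        // Orientation is an "at" vector followed by an "up" vector.
        ALfloat orientation[6] = { 0.0f, 0.0f, -1.0f,   0.0f, 1.0f, 0.0f };
        alListenerfv(AL_ORIENTATION, orientation);
    }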
Edit: here is a link to an online data set that I found at the time, that could be used as a starting point for creating a more realistic OpenAL sound field, from MIT:
http://sound.media.mit.edu/resources/KEMAR.html
Enjoy! :-)
Aureal did this back in 1998. I still have one of their cards, although I'd need Windows 98 to run it.
Imagine ray-tracing, but with audio. A game using the Aureal API would provide geometric environment information (e.g. a 3D map) and the audio card would ray-trace sound. It was exactly like hearing real things in the world around you. You could focus your eyes on the sound sources and attend to given sources in a noisy environment.
As I understand it, Creative destroyed Aureal by means of legal expenses in a series of patent infringement claims (which were all rejected).
In the open-source world, OpenAL exists - an audio version of OpenGL. I think development stopped a long time ago. It had a very simple 3D audio approach, no geometry - no better than EAX in software.
EAX 4.0 (and I think there is a later version?) has finally - after a decade - incorporated some of the geometric, ray-tracing approach Aureal used (Creative bought up their IP after they folded).
The Source (Half-Life 2) engine on the SoundBlaster X-Fi already does this.
It really is something to hear. You can definitely hear the difference between an echo against concrete vs wood vs glass, etc...
A little-known side area is VoIP. While the games themselves have actively developed audio software, you are likely to spend time talking to others while you are gaming as well.
Mumble ( http://mumble.sourceforge.net/ ) is software that uses plugins to determine who is in-game with you. It will then position their audio in a 360-degree field around you, so someone to your left sounds like they are on the left, and someone behind you sounds like they are behind you. This made a creepily realistic addition, and while trying it out it led to funny games of "Marco Polo".
Audio took a massive step backwards in Vista, where hardware was no longer allowed to accelerate it. This killed EAX as it was in the XP days. Software wrappers are gradually being built now.
Very interesting field indeed. So interesting that I'm going to do my master's thesis on this subject - in particular, its use in first-person shooters.
My literature research so far has made it clear that this particular field has little theoretical background. Not a lot of research has been done in this field, and most theory is based on movie-audio theory.
As for practical applications, I haven't found any so far. Of course, there are plenty of titles and packages that support real-time audio-effect processing and apply effects depending on the general surroundings of the listener, e.g. the listener enters a hall, so an echo/reverb effect is applied to the sound samples. This is rather crude. An analogy for visuals would be to subtract 20% from the RGB value of the entire image when someone turns off (or shoots ;) ) one of five lightbulbs in the room. It's a start, but not very realistic at all.
The best work I found was a (2007) PhD thesis by Mark Nicholas Grimshaw, University of Waikato, called The Acoustic Ecology of the First-Person Shooter.
This huge paper proposes a theoretical setup for such an engine, as well as formulating a wealth of taxonomies and terms for analysing game audio. He also argues that the importance of audio for first-person shooters is greatly overlooked, as audio is a powerful force for immersion in the game world.
Just think about it. Imagine playing a game on a monitor with no sound but picture-perfect graphics. Next, imagine hearing realistic game sounds all around you while closing your eyes. The latter will give you a much greater sense of 'being there'.
So why haven't game developers dived into this wholeheartedly already? I think the answer is clear: it's much harder to sell. Improved images are easy to sell: you just show a picture or movie and it's easy to see how much prettier it is. It's even easily quantifiable (e.g. more pixels = better picture). For sound it's not so easy. Realism in sound is much more subconscious, and therefore harder to market.
The effects the real world has on sounds are perceived subconsciously. Most people never even notice most of them, and some of these effects cannot even be heard consciously. Still, they all play a part in the perceived realism of the sound. There is an easy experiment you can do yourself which illustrates this. Next time you're walking on the sidewalk, listen carefully to the background sounds of the environment: wind blowing through leaves, all the cars on distant roads, etc. Then listen to how this sound changes when you walk nearer to or further from a wall, or when you walk under an overhanging balcony, or when you pass an open door. Do it, listen carefully, and you'll notice a big difference in sound - probably much bigger than you ever remembered.
In a game world, these types of changes aren't reflected. And even though you don't (yet) consciously miss them, you subconsciously do, and this has a negative effect on your level of immersion.
So, how good does audio have to be in comparison to the image? More practically: which physical effects in the real world contribute the most to perceived realism? Does this perceived realism depend on the sound and/or the situation? These are the questions I wish to answer with my research. After that, my idea is to design a practical framework for an audio engine which could variably apply some effects to some or all game audio, depending (dynamically) on the amount of available computing power. Yup, I'm setting the bar pretty high :)
I'll be starting in September 2009. If anyone's interested, I'm thinking about setting up a blog to share my progress and findings.
Janne Louw
(BSc Computer Sciences Universiteit Leiden, The Netherlands)

Resources