View Interpolation?

Given images from certain viewpoints, is there software out there which can help me interpolate the views (i.e. viewpoint interpolation software)?
Thanks,

View Interpolation is an ill-posed and very difficult problem, which in general has no optimal solution in real-world scenarios.
This is for several reasons, some of which include:
Wide Baseline Matching
Disparity Estimation
Occlusion Handling (Folds and Holes)
The quality of the outcome highly depends on your video footage, especially on how close together the cameras are.
Nevertheless, research is ongoing. Seitz and Dyer, for example, achieved nice results on constrained examples:
http://homes.cs.washington.edu/~seitz/vmorph/vmorph.htm
Stich et al. from TU Braunschweig showed some amazing results with their Virtual Camera system, which is probably the one thing you are looking for:
http://www.youtube.com/watch?v=uqKTbyNoaxE
And finally, for soccer enthusiasts:
http://www.youtube.com/watch?v=cUK1UobhCX0
http://www.youtube.com/watch?v=UePrOp2s31c
As a starting point, look for Image Morphing and View Morphing.
If you want, I can provide some more papers on the topic.
Good luck! :)
EDIT: Also, if you're interested, here's my work on Spatial and Temporal Interpolation of Multi-View Image-Sequence.
http://tobiasgurdan.de/research/
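If you just want something to experiment with, here is a minimal warp-and-blend sketch in Python using OpenCV's Farneback optical flow: estimate dense correspondences, warp both views partway, and cross-dissolve. It is a naive baseline with no occlusion handling (expect ghosting near folds and holes), not a reimplementation of any of the systems linked above:

    import cv2
    import numpy as np

    def interpolate_views(img_a, img_b, t=0.5):
        """Synthesize an in-between view at fraction t (0 = img_a, 1 = img_b)."""
        gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

        # Dense per-pixel flow in both directions: one (dx, dy) per pixel.
        flow_ab = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                               0.5, 3, 15, 3, 5, 1.2, 0)
        flow_ba = cv2.calcOpticalFlowFarneback(gray_b, gray_a, None,
                                               0.5, 3, 15, 3, 5, 1.2, 0)

        h, w = gray_a.shape
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        grid = np.dstack((xs, ys))

        # Backward warping (flow evaluated at the target pixel, a crude
        # approximation): sample A a fraction t back along the A->B flow,
        # and B a fraction (1 - t) back along the B->A flow.
        warp_a = cv2.remap(img_a, grid - t * flow_ab, None, cv2.INTER_LINEAR)
        warp_b = cv2.remap(img_b, grid - (1.0 - t) * flow_ba, None, cv2.INTER_LINEAR)

        # Cross-dissolve the two pre-warped views.
        return cv2.addWeighted(warp_a, 1.0 - t, warp_b, t, 0.0)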

Related

How to amplify certain audio samples, particularly amplifying a certain frequency?

Can anyone provide sample pseudocode or share an existing link that has sample code?
For example, I have mixed audio containing components at 1 kHz, 2 kHz, 8 kHz, and so on, and I want to boost certain frequencies, like 1 kHz only, in real time.
Reading some DSP books and resources confuses me.
You just need to design and implement a suitable digital filter. This is a large and complex subject area though, so you won't get a simple answer here. Probably the best first step would be to read a good introductory book on DSP, e.g. Understanding DSP by Rick Lyons, which is very good for beginners as it's not too heavy on the math and has a more practical bent than most introductory DSP books.
For this particular application though what you are trying to do is similar to implementing a graphic equalizer, and there are many pointers to how to implement this kind of thing if you use e.g. "graphic equalizer" as a search term.
There's a lot of math behind digital filtering. Sorry, but I think it is important to at least understand basic filters (like those used in electronics). If you don't want to go through the basics, it's best to get an audio graphic equalizer where you can play with the (virtual) sliders. If you want to implement a very specific filter, read on.
Real time: it depends on your computing platform. If this is a small micro (like an AVR or Microchip PIC), you'll need an efficient algorithm, most likely an IIR band-pass filter. The equivalent of a graphic equalizer consists of multiple band-pass filters, all summed together. See http://en.wikipedia.org/wiki/Infinite_impulse_response
A more computationally intensive approach uses FIR filters; in that case you can also control the phase of the filtered signal. See http://en.wikipedia.org/wiki/Finite_impulse_response
Once you settle on an algorithm (e.g. IIR), you'll need to calculate the coefficients. The algorithm is simple; calculating the coefficients is not.
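To make that concrete, here is a sketch of a single peaking-EQ band in Python, using the well-known biquad formulas from Robert Bristow-Johnson's "Audio EQ Cookbook". The parameter choices below are just examples:

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
        """Second-order IIR (biquad) coefficients for one peaking-EQ band."""
        amp = 10.0 ** (gain_db / 40.0)              # amplitude from dB
        w0 = 2.0 * np.pi * f0 / fs                  # center frequency, rad/sample
        alpha = np.sin(w0) / (2.0 * q)
        b = np.array([1 + alpha * amp, -2 * np.cos(w0), 1 - alpha * amp])
        a = np.array([1 + alpha / amp, -2 * np.cos(w0), 1 - alpha / amp])
        return b / a[0], a / a[0]                   # normalize so a[0] == 1

    # Boost 1 kHz by 12 dB in a signal sampled at 44.1 kHz.
    fs = 44100
    b, a = peaking_eq_coeffs(fs, f0=1000.0, gain_db=12.0, q=2.0)
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 8000 * t)
    y = lfilter(b, a, x)  # for real time, filter block by block instead,
                          # carrying the state across blocks (lfilter's zi/zf)

A graphic equalizer is then just several such bands, one per frequency range, combined as described above.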
I found a book matching your question: Audio digital signal processing in real time
I browsed through it; it seems to have the right answers.

How does pathfinding in RTS video games work?

In a game such as Warcraft 3 or Age of Empires, the ways that an AI opponent can move about the map seem almost limitless. The maps are huge and the position of other players is constantly changing.
How does the AI pathfinding in games like these work? Standard graph-search methods (such as DFS, BFS, or A*) seem impossible in such a setup.
Take the following with a grain of salt, since I don't have first-person experience with pathfinding.
That being said, there are likely to be different approaches, but I think standard graph-search methods, notably (variants of) A*, are perfectly reasonable for strategy games. Most strategy games I know seem to be based on a tile system, where the map is composed of little squares that map easily to a graph. One example is StarCraft II (Screenshot), which I'll keep using as an example in the remainder of this answer because I'm most familiar with it.
While A* can be used for real-time strategy games, there are a few drawbacks that have to be overcome by tweaks to the core algorithm:
A* is too slow
Since an RTS is by definition "real time", waiting for the computation to finish will frustrate the player, because the units will lag. This can be remedied in several ways. One is to use multi-tiered A*, which computes a rough course before taking smaller obstacles into account. Another obvious optimization is to group units heading to the same destination into a platoon and only calculate one path for all of them.
Instead of the naive approach of making every single tile a node in the graph, one could also build a navigation mesh, which has fewer nodes and could be searched faster – this requires tweaking the search algorithm a little, but it would still be A* at the core.
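For reference, here is what the core looks like on a plain tile grid, before any of these optimizations; a minimal Python sketch assuming the map is a 2D array where True means walkable:

    import heapq

    def astar(grid, start, goal):
        """Return a list of (row, col) tiles from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])

        def h(node):  # Manhattan distance, admissible for 4-way movement
            return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

        open_heap = [(h(start), start)]
        came_from = {}
        g = {start: 0}
        while open_heap:
            _, current = heapq.heappop(open_heap)
            if current == goal:
                path = [current]            # walk the parent links back
                while current in came_from:
                    current = came_from[current]
                    path.append(current)
                return path[::-1]
            r, c = current
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                    tentative = g[current] + 1
                    if tentative < g.get(nxt, float("inf")):
                        came_from[nxt] = current
                        g[nxt] = tentative
                        heapq.heappush(open_heap, (tentative + h(nxt), nxt))
        return None                         # goal unreachable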
A* is static
A* works on a static graph, so what to do when the landscape changes? I don't know how this is done in actual games, but I imagine the pathing is done repeatedly to cope with new obstacles or removed obstacles. Maybe they are using an incremental version of A* (PDF).
To see a demonstration of StarCraft II coping with this, go to 7:50 in this video.
A* has perfect information
A part of many RTS games is unexplored terrain. Since you can't see the terrain, your units shouldn't know where to walk either, but often they do anyway. One approach is to penalize walking through unexplored terrain, so units are more reluctant to take advantage of their omniscience; another is to take the omniscience away and just assume unexplored terrain is walkable. This can result in units stumbling into dead ends, sometimes ones that are obvious to the player, until they finally explore a path to the target.
Fog of War is another aspect of this. For example, in StarCraft 2 there are destructible obstacles on the map. It has been shown that you can order a unit to move to the enemy base, and it will start down a different path if the obstacle has already been destroyed by your opponent, thus giving you information you should not actually have.
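In the grid sketch above, such a penalty is just a tweak to the step cost; something like this (the weight of 3 is an arbitrary example):

    def step_cost(tile, explored):
        """Unexplored tiles are passable but cost extra, so paths prefer
        scouted ground without strictly forbidding the unknown."""
        return 1 if tile in explored else 3
    # ...then use g[current] + step_cost(nxt, explored) instead of g[current] + 1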
To summarize: You can use standard algorithms, but you may have to use them cleverly. And as a last bonus: I have found Amit’s Game Programming Information interesting with regard to pathing. It also has links to further discussion of the problem.
This is a bit of a simple example, but it shows that you can create the illusion of AI / in-depth pathfinding from a non-complex set of rules: Pac-Man Pathfinding
Essentially, it is possible for the AI to know only local (nearby) information and make decisions based on that knowledge.
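As a toy illustration of that idea (my own sketch, not the actual Pac-Man ghost rules), a chaser that only inspects adjacent tiles can already look purposeful:

    def greedy_chase_step(grid, pos, target):
        """Move to the walkable neighbor closest (straight-line) to the
        target; stay put if boxed in. No global search at all."""
        r, c = pos
        options = [(nr, nc)
                   for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                   if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc]]
        if not options:
            return pos
        return min(options, key=lambda n: (n[0] - target[0]) ** 2 + (n[1] - target[1]) ** 2)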
A* is a common pathfinding algorithm. This is a popular game development topic - you should be able to find numerous books and websites that contain information.
Check out visibility graphs. I believe that is what they use for path finding.

Procedural sound generation algorithms?

I'd like to be able to algorithmically create sounds (like monster growls, or distant thunder). This isn't as widely covered on the net as more traditional procedural content (terrains, etc.). Does anyone have any algorithms for creating these kinds of sounds?
This, in general, is a very hard problem. Just like drawing, each sound is its own thing and needs its own algorithms, and, like drawing, some are more easily done by algorithm than others. There's no general algorithm for creating sound any more than there's a general algorithm for drawing all things like faces, insects, and mountains. Each is its own project (and often quite a big one), unless you're just looking to draw circles or generate sine waves.
Most of the case studies I know of are the many attempts to generate musical instrument sounds, and generally each of these attempts is a PhD thesis.
For a time-efficient solution, sampling is the way to go.
Or, if you really need a procedural approach, you could ask the question for one specific type of sound, and people might be able to come up with an algorithm for it. For example, I'd be interested in taking a shot at a "distant thunder" algorithm, but don't want to bother if having just thunder but no monsters, etc, is not useful to you.
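To give a flavor of what such an algorithm might look like, here is a crude "distant thunder" sketch in Python: white noise, low-pass filtered (distance attenuates the highs), shaped by a few decaying bursts. This is my own toy sketch, not a published thunder model, and every constant is a guess to be tweaked by ear:

    import numpy as np
    from scipy.signal import butter, lfilter

    def distant_thunder(duration=4.0, fs=44100, seed=0):
        rng = np.random.default_rng(seed)
        n = int(duration * fs)
        noise = rng.standard_normal(n)

        # Distant thunder is mostly low-frequency rumble: keep under ~150 Hz.
        b, a = butter(4, 150.0 / (fs / 2), btype="low")
        rumble = lfilter(b, a, noise)

        # Envelope: a few random "claps", each decaying exponentially.
        t = np.arange(n) / fs
        env = np.zeros(n)
        for _ in range(4):
            onset = rng.uniform(0.0, duration * 0.6)
            strength = rng.uniform(0.3, 1.0)
            decay = rng.uniform(0.5, 1.5)
            env += strength * np.exp(-(t - onset) / decay) * (t >= onset)

        out = rumble * env
        return out / np.max(np.abs(out))    # normalize to [-1, 1]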
I would suggest checking out the many software projects and papers of Perry Cook who has done some great work in the realm of physical modelling (though his website is a bit of a nightmare to navigate). Though as tom10 says, it's a very hard area. If you have the stomach for a bit of signal processing then it's a very fascinating area to get into.

Graphics/Vision Interesting Topics

I would like to do an interesting project for a computer graphics course. I know that there is a lot of literature out there (e.g. SIGGRAPH conference papers). I have a very wide range of interests with regard to computer graphics (e.g. image processing, 3D modeling, rendering, animation). However, I've only taken computer vision/graphics for two semesters and thus don't have much background experience, except for the class projects that I had to do.
I've been looking through SIGGRAPH papers trying to see if there is anything that will be of interest to me but the literature is extremely vast. I was wondering if anyone has any topic suggestions, anything interesting that you ran across that you could recommend. I would prefer to do something fun yet slightly challenging (not really interested in making a shooter game).
If this question does not belong here, I apologize and please let me know where I should move it.
Thanks!
Image Drawing animator. (the name is kind of misleading, but I didn't care much about it)
Anyway, the software does the following:
Takes an image, say a JPEG or BMP, as input.
Extracts the lines from the image. (I used Matlab and a Laplacian edge filter.)
Converts the static lines to vector paths.
Simulates drawing the image using the extracted paths.
In summary, you give it an image, for example a cityscape; the program extracts all the lines and starts drawing the buildings, streets, and sunset lines, then finally adds the colors one by one until the full image is done.
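For the line-extraction step, a rough equivalent in Python with OpenCV (rather than the original Matlab pipeline) might look like this; the filename and threshold are placeholders:

    import cv2

    img = cv2.imread("cityscape.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    blurred = cv2.GaussianBlur(img, (5, 5), 0)           # suppress noise first
    edges = cv2.Laplacian(blurred, cv2.CV_16S, ksize=3)  # second-derivative edges
    edges = cv2.convertScaleAbs(edges)                   # back to 8-bit magnitude
    _, line_mask = cv2.threshold(edges, 40, 255, cv2.THRESH_BINARY)

    # Each contour is an ordered list of points, i.e. a drawable path.
    contours, _ = cv2.findContours(line_mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)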
Real time hand(s) detector.
You'll have plenty of interesting and fun applications with this.

How does the "Unlimited Detail" graphics technology work?

So I stumbled upon this "new" graphics engine/technology called Unlimited Detail.
This seems to be pretty interesting, provided it's real and not a fake.
They have some videos explaining the technology but they only scratch the surface.
What do you think about it? Is it programmatically possible?
Or is it just a scam for investors?
Update:
Since the only answer was based on voxels, I have to copy this from their site:
Unlimited Detail's method is very different from any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing, polygons, and point cloud/voxels; they all have strengths and weaknesses. Polygons run fast but have poor geometry; ray tracing and voxels have perfect geometry but run very slowly.
Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine.
The underlying technology is related to something called sparse voxel octrees (see, e.g., this paper), which aren't anything incredibly amazing. What the video doesn't tell you is that these are not at all suited for things that need to be animated, so they're of limited use for anything that uses procedural animation (e.g., all ragdoll physics, etc.). So they're very inflexible. You can get great detail, but you get it in a completely static world.
A rough summary of where things stand with this technology in mainstream games is here. You will also want to check out Samuli Laine's work; he's a Finnish researcher who is focusing a great deal of his attention on this subject and is unlocking some of the secrets to implementing it well.
Update: Yes, the website says it's not "voxel-based". I suspect this is merely an issue of semantics, however, in that what they're using are essentially voxels, but because it's not exactly a voxel they feel safe in being able to claim that it's not voxel-based. In any case, the magic isn't in how similar to a voxel it is -- it's how they select which voxels to actually show. This is the primary determinant of speed.
Right now, there is no incredibly fast way to show voxels (or something approximating a voxel). So either they have developed a completely new, non-peer-reviewed method for filtering voxels (or something like them), or they're lying.
You might find more detail in the following patents:
"A Computer Graphics Method For Rendering Three Dimensional Scenes"
"A Method For Efficent Streaming Of Octree Data For Access"
- Each voxel (they call it a "node") is represented as a single bit, along with information about voxels at a finer level of detail.
The full-text can be viewed online here:
https://www.lens.org/lens/search?q=Euclideon+Pty+Ltd&l=en
or
http://worldwide.espacenet.com/searchResults?submitted=true&query=EUCLIDEON
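To make the "single bit per node" description above concrete, here is a toy sparse voxel octree in Python. It is a generic SVO sketch of the public idea, not Euclideon's actual data structure:

    class OctreeNode:
        __slots__ = ("child_mask", "children", "color")

        def __init__(self):
            self.child_mask = 0    # bit i set => octant i is occupied
            self.children = {}     # octant index -> OctreeNode (sparse)
            self.color = None      # payload stored at the leaf level

    def insert(node, x, y, z, depth, color):
        """Insert a voxel at integer coords (x, y, z), depth levels deep."""
        for level in range(depth - 1, -1, -1):
            # Build the octant index from one bit of each coordinate.
            octant = (((x >> level) & 1)
                      | (((y >> level) & 1) << 1)
                      | (((z >> level) & 1) << 2))
            if not node.child_mask & (1 << octant):
                node.child_mask |= 1 << octant
                node.children[octant] = OctreeNode()
            node = node.children[octant]
        node.color = color

    # A 16x16x16 volume with a single voxel set; empty space costs nothing.
    root = OctreeNode()
    insert(root, 3, 7, 12, depth=4, color=(255, 255, 255))

Rendering then amounts to traversing the tree front to back and stopping at the level of detail a pixel requires, which fits the site's description of it as "more like a search algorithm than a 3D engine".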
