I'd like to make a scene that uses meshes and primitives (such as spheres, cylinders, boxes, etc.). I was wondering if there are any recommendations with regard to where I can go to find .off files that describe complex meshes, such as mountains, rocks, trees, animals, etc.
There are plenty of sites that share/sell models, like TurboSquid.
You can use Blender to import many 3D formats (3DS, OBJ, LWO, MA, etc.) and export your mesh(es) to .off format.
In Blender's export dialog, .obj is highlighted, but you can see DEC Object File Format (*.off) on the list.
After some searching I've learned it is possible to create multiple vertex buffers, each for a specific 3D model, and set them in the Input Assembler to be read by my shaders - or at least this is what I could understand. But reading Microsoft's documentation got me very confused about how to do this the right way. This is what I was reading, and it says I can pass an array of vertex buffers to the IA stage, but it also says that the maximum number of vertex buffers my Input Assembler can take in D3D11 is 32. What would I do if I needed 50 different models rendered at the same time? It would also help if someone could clarify how pOffset works in this situation with multiple models; as I understand it, it should always be 0, since the beginning of my buffers is always vertex data, but I may have understood that wrong. Lastly, I'll add that I've already rendered buffers consisting of multiple models together, but I don't know exactly how to deal with many individual models.
The short answer is: you don't try to draw all your models in one Draw call. The 32-slot limit is on how many vertex buffers can be bound to the Input Assembler at the same time (for multi-stream vertex data), not on how many you can use in total - you rebind buffers between draw calls as often as you need.
You are free to organize rendering in many ways, but here is one approach:
A 'model' consists of one or more 'meshes'. Each mesh is a collection of vertices (in a VB), indices (in an IB), and some material information associated with each 'subset' of indices.
To draw:
foreach M in models
    foreach mesh in M
        foreach part in mesh
            Set shaders based on material
            Set VB/IB based on mesh
            DrawIndexed
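In D3D11 terms, that loop might look roughly like the sketch below. The Model/Mesh/MeshPart types and their fields are hypothetical app-side structures; only the ID3D11DeviceContext calls are real API. Note that pOffset is 0 here, exactly as you guessed, because each buffer begins with vertex data, and only one VB slot is ever bound per draw.

    // Sketch only: Model/Mesh/MeshPart are hypothetical app-side types.
    for (const Model& model : models)
    {
        for (const Mesh& mesh : model.meshes)
        {
            UINT stride = mesh.vertexStride;
            UINT offset = 0; // vertex data starts at the front of the buffer
            context->IASetVertexBuffers(0, 1, &mesh.vertexBuffer, &stride, &offset);
            context->IASetIndexBuffer(mesh.indexBuffer, DXGI_FORMAT_R16_UINT, 0);

            for (const MeshPart& part : mesh.parts)
            {
                context->VSSetShader(part.material->vertexShader, nullptr, 0);
                context->PSSetShader(part.material->pixelShader, nullptr, 0);
                context->DrawIndexed(part.indexCount, part.startIndex, part.baseVertex);
            }
        }
    }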
Since this is a number of nested loops, there are several ways to improve the performance. For example, you might just queue up the information instead of actually calling DrawIndexed, then sort by material. Then call DrawIndexed from the sorted queue.
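A minimal sketch of that queue-and-sort idea (the DrawRecord struct and the state-setting details are hypothetical):

    #include <algorithm>
    #include <vector>

    // Sketch: record draws instead of issuing them, sort by material
    // so shader/state changes are minimized, then replay the queue.
    struct DrawRecord
    {
        const Material* material; // sort key
        const Mesh*     mesh;
        UINT indexCount, startIndex;
        INT  baseVertex;
    };

    std::vector<DrawRecord> queue;
    // ...the nested loops above push records into 'queue'...
    std::sort(queue.begin(), queue.end(),
              [](const DrawRecord& a, const DrawRecord& b)
              { return a.material < b.material; });

    for (const DrawRecord& d : queue)
    {
        // Set shaders/VB/IB only when they differ from the previous record,
        // then issue the draw.
        context->DrawIndexed(d.indexCount, d.startIndex, d.baseVertex);
    }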
For alpha-blending to appear correct, you have to do at least two rendering passes: first render the opaque geometry, then render the alpha-blended geometry (typically sorted back-to-front).
You may also want to combine all the content in a given model into one VB and one IB with offsets rather than use individual resources.
You may have the same model in multiple locations in the world, so you may have many model instances sharing the same mesh data. In this case, sorting by VB/IB as well as material could be useful. If you are drawing the same model in many locations (100s or 1000s), then you should look into hardware instancing.
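If you do go that route, a rough instancing sketch looks like this (the buffer names, Vertex/InstanceData types, and instanceCount are hypothetical; the input layout must also declare the slot-1 elements with D3D11_INPUT_PER_INSTANCE_DATA):

    // Sketch: per-vertex geometry in slot 0, per-instance data
    // (e.g. world transforms) in slot 1 - one reason multiple
    // simultaneous VB slots exist at all.
    ID3D11Buffer* buffers[2] = { mesh.vertexBuffer, instanceBuffer };
    UINT strides[2] = { sizeof(Vertex), sizeof(InstanceData) };
    UINT offsets[2] = { 0, 0 };
    context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
    context->IASetIndexBuffer(mesh.indexBuffer, DXGI_FORMAT_R16_UINT, 0);
    // One call draws the mesh 'instanceCount' times.
    context->DrawIndexedInstanced(mesh.indexCount, instanceCount, 0, 0, 0);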
An example implementation of this can be found in DirectX Tool Kit as Model, ModelMesh, and ModelMeshPart.
I'm taking an introductory graphics course, and while I intuitively understand that converting a click or touch into object coordinates will make the math much cleaner, reduce the chances for human error, and potentially make debugging easier, none of these is actually a very good conceptual explanation of why object coordinate spaces are used in selection tests, as opposed to simply using world coordinates for the test - rather, they're just observations of what tends to happen when object coordinates are used. So I ask: why?
A selection test involves comparing the click coordinates, which you get in window coordinates, against lots and lots of object features, which are represented in object coordinates.
You need to transform them into the same coordinate system in order to do the checks, so you can EITHER transform the one simple click point OR you can transform all the various object features.
Transforming one point or line is just a lot easier than transforming a whole bunch of object features of various types.
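As a rough sketch of the cheap direction using DirectXMath (worldMatrix, rayOriginWorld, and rayDirWorld are placeholders for whatever your app has): transform the pick ray once into the object's local space, then test against the untransformed geometry.

    using namespace DirectX;

    // Bring the ray into object space with the inverse world matrix...
    XMMATRIX invWorld     = XMMatrixInverse(nullptr, worldMatrix);
    XMVECTOR rayOriginObj = XMVector3TransformCoord(rayOriginWorld, invWorld);
    XMVECTOR rayDirObj    = XMVector3Normalize(
                                XMVector3TransformNormal(rayDirWorld, invWorld));
    // ...then intersect (rayOriginObj, rayDirObj) against the object's
    // local-space features, which never had to be transformed at all.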
There are cases where the location of a specific object or point may not be known within a world coordinate system, but is known relative to some other coordinate system.
To summarize an example from my course text, consider two towns: one uses a grid system for its layout, and the other uses what I can only describe as the New England we-made-cow-trails-into-roads method. A government employee is tasked with creating a layout of the area that includes both towns, and in doing so has to convert the two local coordinate systems into a third that encompasses them both.
Sometimes, using a world atlas just isn't practical to get across the street, and so something much more local (and relevant) is used instead, as it provides much more detail over a much smaller area.
The text also explains that it may be more than simply impractical to use a given coordinate system - it may yield results that are improbable or just plain wrong. This is evidenced in the evolution of the geocentric and heliocentric models of the universe - the distance of the stars from us was calculated with very different results using the two models.
Thinking of my own example, the best that comes to mind would be something like your own internal organs - from the outside, you don't know for sure exactly the shape, size, and structure of each of them, but your own body does. In order to be able to access that information, you need to look inside the body (ideally in a way that doesn't kill you). It's not something that is plainly observable from outside.
I am trying to do an image manipulation in which the user is prompted to enclose the mouth portion within an image. Once the user does that, my application should identify the pixels that belong to the teeth (the color varying from white to yellow), and then I would like to brighten only those pixels. Could anyone give me some guidance on how to proceed?
Your question is, quite honestly, very broad, as an adequate answer will touch on a large number of areas.
Nevertheless, what you are trying to attempt is called Pattern Recognition. More specifically, your problem is geared towards image-analysis, dealing mainly in Template Matching:
"Template matching is a technique in digital image processing for finding small parts of an image which match a template image. It can be used in manufacturing as a part of quality control, a way to navigate a mobile robot, or as a way to detect edges in images."
The Template Matching page has a C-like language sample algorithm which demonstrates what you are attempting to do (identify a specific color within an image).
As for how to go about this, generally speaking you will have to load the image, store it in an array, and then manipulate it as the algorithm suggests:
"One way to perform template matching on color images is to decompose the pixels into their color components and measure the quality of match between the color template and search image using the sum of the absolute differences (SAD) computed for each color separately."
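A minimal, self-contained sketch of that per-channel SAD (the Image struct is hypothetical and assumes tightly packed 8-bit RGB):

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    struct Image { int w, h; std::vector<uint8_t> rgb; }; // 3 bytes per pixel

    // Sum of absolute differences of the template against the search image
    // at offset (ox, oy); each color channel contributes separately.
    long SadAt(const Image& img, const Image& tpl, int ox, int oy)
    {
        long sad = 0;
        for (int y = 0; y < tpl.h; ++y)
            for (int x = 0; x < tpl.w; ++x)
                for (int c = 0; c < 3; ++c)
                {
                    int a = img.rgb[((oy + y) * img.w + (ox + x)) * 3 + c];
                    int b = tpl.rgb[(y * tpl.w + x) * 3 + c];
                    sad += std::abs(a - b);
                }
        return sad;
    }

Scanning every valid (ox, oy) and keeping the minimum gives the best-matching location; for your case, you would then brighten only the pixels in that region whose color falls in the white-to-yellow range.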
Of course, there are numerous projects in various languages that do that for you. My suggestion is to read up a bit more on the topic, pick a language, and attempt a solution using libraries as necessary.
One book that you might find to be very helpful is the classic Phillips: Image Processing in C, even if you don't want to use C. Why? Because it pores over a lot of the algorithmic details - how they work and how to implement them. And it's free, too.
I am working on a GE local bike mapping project, and I am using placemarks to create labels with linked descriptions for the roads/trails on the map. While it is nice for the user to be able to click a placemark/label on the map for a description, as the map has grown, the labels have also come to create visual clutter. The placemarks for the labels/descriptions are currently stored with the lineStrings in folders for each road.
It would be nice to be able to turn all of the labels off or on without opening each of the separate road folders to de/select each one. The names of most of the roads are also available on the underlying Google Earth hybrid layer, so the labels and descriptions are helpful but not absolutely necessary.
Download https://sites.google.com/site/tuobikes/kml/hullcrabtree.kmz for an example.
Is there any way to define a set of placemarks as a subtype in order to turn them all on or off as a group? For example, placemark type=label or placemark type=photo... This seems like useful functionality, but I don't see it in the KML reference.
Is storing the placemarks for the labels/descriptions together in a folder separate from the lineStrings for the roads the only way to solve the problem?
I don't know of any way other than a Folder to turn on/off a group of related placemarks.
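For example, a minimal sketch (names, descriptions, and coordinates are placeholders) that keeps every label placemark under a single Folder, so one checkbox in the Places panel toggles them all:

    <Folder>
      <name>Road labels</name>
      <!-- Unchecking this folder hides every placemark inside it at once -->
      <Placemark>
        <name>Example Road</name>
        <description>Linked description for the road.</description>
        <Point><coordinates>-77.05,38.89,0</coordinates></Point>
      </Placemark>
      <!-- ...remaining label placemarks... -->
    </Folder>

The trade-off is exactly what you noted in the question: the labels end up stored apart from the lineStrings for each road.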
That said, consider using one or more KML Regions to reduce clutter:
"Regions are a powerful KML feature that allows you to add very large datasets to Google Earth without sacrificing performance. Data is loaded and drawn only when it falls within the user's view and occupies a certain portion of the screen. Using Regions, you can supply separate levels of detail for the data, so that fine details are loaded only when the data fills a portion of the screen that is large enough for the details to be visible."
I need to create whole-sky maps, but Google Earth in Sky mode has limited "zoom out." Is it possible to create an orthographic map projection with Google Earth (or, for that matter, MS Virtual Earth)? Really any of the standard projections will do, although it would be nice to have options.
Of course, I could render the projection statically and then paint my KML layer on top of it, but the ideal use-case would allow the user to add additional KML layers, zoom in and out, etc.
No, they don't offer that. This was a bone of contention in a series of articles I wrote just over a year ago. I wasn't after an orthographic projection, but equal-area projections. People insist on plotting geostatistical data on non-equal-area projections, mainly because they don't know any better and/or because Google Maps, Bing Maps, etc. do not offer any alternatives.
My demo of how it should be done used OpenLayers, which can support various different map projections.
I haven't seen OpenLayers being used for star maps, but I can't see any fundamental reasons why it should not work. Also the star data is generally available in the public domain, I believe.