Line of Sight blockage using bitmaps - rasterizing

I'm building a simulation model for a vehicle. I must determine if the vehicle's line of sight is being blocked by other vehicles in the sim.
(There is no visual display for our sim, it's purely for calculations.)
One idea is to generate a view with the camera at the line of sight's origin and orientation. Then I rasterize the scene into a black and white bitmap, with black meaning blocked, and white meaning clear.
Does this seem feasible?

Is this in 2D or 3D? Also, have you considered using an existing simulation framework such as Gazebo or Microsoft Robotics?
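Either way, the check described above reduces to rasterizing the view into an occlusion bitmap and counting the clear pixels. Here is a minimal sketch in C++, assuming the scene has already been rendered into an 8-bit grayscale buffer with 0 meaning blocked and 255 meaning clear (the buffer layout and the 10% visibility threshold are illustrative assumptions, not part of the original question):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Fraction of the view that is unobstructed, given a rasterized occlusion
// bitmap (0 = blocked by another vehicle, 255 = clear).
// "bitmap" is assumed to hold width*height bytes in row-major order.
double visibleFraction(const std::vector<std::uint8_t>& bitmap,
                       std::size_t width, std::size_t height)
{
    if (bitmap.empty() || bitmap.size() != width * height)
        return 0.0;

    std::size_t clear = 0;
    for (std::uint8_t p : bitmap)
        if (p == 255)            // white pixel: line of sight not blocked here
            ++clear;

    return static_cast<double>(clear) / static_cast<double>(bitmap.size());
}

// Example decision rule: treat the target as visible if more than 10% of the
// rasterized view is unobstructed (the threshold is an arbitrary choice).
bool hasLineOfSight(const std::vector<std::uint8_t>& bitmap,
                    std::size_t width, std::size_t height)
{
    return visibleFraction(bitmap, width, height) > 0.10;
}
```

Whether you count the whole frame or only the target's silhouette pixels, and where you put the threshold, are modelling decisions rather than anything the rasterizer dictates.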

Erase Pixels From Sprite Cocos2d-JS

I'm getting the feeling this won't be possible, but worth asking anyway I guess.
I have a background sprite and a foreground sprite, both are the same size as the window/view.
As the player sprite moves across the screen I want to delete the pixels it touches to reveal the background sprite.
This is not just for display purposes, I want the gaps the player has drawn or "dug" out of the foreground layer to allow enemies to travel through, or objects to fall into. So hit detection will be needed with the foreground layer.
This is quite complex, and maybe Cocos2d-JS is not the best platform to use. If it's not possible, could you recommend another platform that would make this effect easier to achieve?
I believe it's possible, but I'm not capable of giving you a proper answer.
All I can say is that you'll most likely have two choices:
a. Make a polygonal physics shape and deform it, then use it as a "filter" to display your terrain image (here's a proof-of-concept example in another language using Box2D).
b. Directly manipulate pixels and use a mask for collision detection (here's pixel-perfect collision detection in Cocos2d-JS; sadly I have no information on modifying pixels).
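For what it's worth, option (b) amounts to keeping a solidity mask alongside the foreground sprite: clear mask cells where the player digs, and test enemies and falling objects against the same mask. An engine-agnostic sketch in C++ (the class and method names are invented for illustration; in Cocos2d-JS you would also clear the corresponding pixels in the foreground sprite's texture):

```cpp
#include <cstddef>
#include <vector>

// A solidity mask for the destructible foreground layer:
// 1 = solid foreground pixel, 0 = dug out (background shows through).
class TerrainMask {
public:
    TerrainMask(std::size_t width, std::size_t height)
        : width_(width), height_(height), solid_(width * height, 1) {}

    // Carve a circular hole where the player sprite passes.
    void dig(int cx, int cy, int radius) {
        for (int y = cy - radius; y <= cy + radius; ++y)
            for (int x = cx - radius; x <= cx + radius; ++x)
                if (inBounds(x, y) &&
                    (x - cx) * (x - cx) + (y - cy) * (y - cy) <= radius * radius)
                    solid_[static_cast<std::size_t>(y) * width_ + x] = 0;
        // In the actual game you would also erase the same pixels from the
        // foreground sprite's texture so the background becomes visible.
    }

    // Hit detection: enemies and falling objects query the same mask.
    bool isSolid(int x, int y) const {
        return inBounds(x, y) && solid_[static_cast<std::size_t>(y) * width_ + x] != 0;
    }

private:
    bool inBounds(int x, int y) const {
        return x >= 0 && y >= 0 &&
               static_cast<std::size_t>(x) < width_ &&
               static_cast<std::size_t>(y) < height_;
    }

    std::size_t width_, height_;
    std::vector<unsigned char> solid_;
};
```

The point is that the render layer and the gameplay layer share one source of truth for "is this pixel still solid", so digging and hit detection can never drift apart.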

How to know which triangles contribute to the color of a pixel?

I'm totally new to graphics and DirectX, and I've run into a problem that no one around me knows graphics well enough to help with. Sorry if the question seems naive.
I use DirectX 11 to render a mesh, and I want to get a buffer for each pixel. This buffer should store a linked list (or some other structure) of all the triangles that contribute color to that pixel.
Which shader, or which part of DirectX, should I work in? Or, more simply, where can I get the triangle information in the pixel shader?
You can write the triangle ID in the pixel shader, but using the hardware z-buffer you can only capture one triangle per pixel.
With multisampled textures you can capture more triangles per pixel, which should be enough in practical situations.
If your triangles are extremely small and many of them are visible within one pixel, then you should consider an A-buffer with your own hidden-surface-removal algorithm.
If you need it only for debugging purposes, you can use any of the graphics debuggers:
Visual Studio Graphics Debugger (integrated since Visual Studio 2012)
For AMD GPUs: GPUPerfStudio
For NVidia GPUs: Nsight
Good old PIX from DX SDK.
If you need it at runtime (BTW, why? =) )
Use system-generated values (SV_PrimitiveID, SV_VertexID) to identify the exact primitive, or even vertex, that contributed to the pixel's color. It is tricky, but possible.
Another way is to pass some kind of custom triangle ID through the vertex declaration. But be aware of culling.
You can output the final data from the pixel shader into a buffer and then read it back on the CPU.
All of these are fairly advanced DirectX topics, and I'm not sure a coder who is "totally new to graphics and DX" can tackle them.
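To make that last option concrete: render something like SV_PrimitiveID into an R32_UINT render target, then copy that target into a staging texture the CPU can map. A rough D3D11 sketch (the device, context, and idTexture names are placeholders for objects your renderer already owns; only a single-sampled target is assumed here):

```cpp
#include <d3d11.h>
#include <cstddef>
#include <cstdint>
#include <vector>

// Copy an R32_UINT render target (containing e.g. SV_PrimitiveID per pixel)
// into CPU-readable memory. Returns one uint32_t ID per pixel, row-major.
std::vector<std::uint32_t> ReadBackTriangleIds(ID3D11Device* device,
                                               ID3D11DeviceContext* context,
                                               ID3D11Texture2D* idTexture,
                                               UINT width, UINT height)
{
    // Staging texture: same format and size, CPU-readable, not bindable.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R32_UINT;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

    ID3D11Texture2D* staging = nullptr;
    if (FAILED(device->CreateTexture2D(&desc, nullptr, &staging)))
        return {};

    // GPU -> staging copy, then map the staging copy for CPU reads.
    context->CopyResource(staging, idTexture);

    std::vector<std::uint32_t> ids(static_cast<std::size_t>(width) * height);
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        for (UINT y = 0; y < height; ++y)
        {
            const auto* row = reinterpret_cast<const std::uint32_t*>(
                static_cast<const std::uint8_t*>(mapped.pData) + y * mapped.RowPitch);
            for (UINT x = 0; x < width; ++x)
                ids[static_cast<std::size_t>(y) * width + x] = row[x];
        }
        context->Unmap(staging, 0);
    }

    staging->Release();
    return ids;
}
```

The pixel shader side is just a matter of outputting its SV_PrimitiveID input to that extra render target; with the z-buffer enabled you will, as noted above, only ever see the frontmost triangle per pixel.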

"Virtual screens" in 3D engines (displaying on a wall another portion of the 3D world)

To clarify the technical problem I have, I want to describe the scene I have in mind:
In a 3D computer simulation, I want to build a kind of cabin (cube form) that stands isolated in a large plane. There's 1 door to enter the cabin. Next to this door I want to show a movie playing (avi file or something) on the wall of the cabin.
If you enter the cabin, on all four sides I want to show a virtual 3D landscape projection based on the video projected outside: every pixel in the video will be represented as a cube (RGB -> height, width, depth). The resulting landscape of cubes needs to be projected on the inside walls of the cabin, and as a user you will not be able to walk into this projection (it's a virtual window, not a portal).
Technically, for me this translates into these requirements: I want to
display a movie inside the 3D world on a wall
access the pixel data of this movie
transform on the fly these pixels into 3D representation of cubes
show these cubes as a virtual projection on a wall in the game (as a kind of visual teleport that you can't cross).
I was wondering which 3D engine would allow this? Any programming language is fine. I'm fluent in Mono/.NET and Java, but I can manage C++ or other languages (as long as the engine is well documented).
Kind Regards,
Ruben.
PS: I don't know if this question is of interest to anybody else, at least not in a functional kind of way. But maybe it triggers a hypothetical interest :)
Any engine that supports dynamic texture maps and multiple viewports (rendering surfaces) will do:
1. Render the scene you want to show on the wall.
2. Texture the wall with the output of step 1.
3. Render your room scene.
Many engines support this. The Unreal Tournament engine (UT2004) supports it, as evidenced by the dynamic texture on carried sniper scopes (for example, in Killing Floor). The security camera screens in Half-Life 2 (Source engine) do this as well.
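For reference, the render-to-texture step looks roughly like this in plain OpenGL. This is only a sketch: it assumes an existing GL 3+ context, and drawCubeLandscape() and drawRoom() are hypothetical stand-ins for your real scene code.

```cpp
#include <GL/glew.h>   // or any other loader providing the GL 3+ entry points

// Hypothetical scene functions, defined elsewhere in the application.
void drawCubeLandscape();            // the "virtual window" scene
void drawRoom(GLuint wallTexture);   // binds wallTexture on the wall quad

// One-time setup: an offscreen framebuffer whose color attachment
// will later be used as the texture on the cabin wall.
GLuint wallTexture = 0, wallFbo = 0, wallDepth = 0;

void createWallTarget(int width, int height)
{
    glGenTextures(1, &wallTexture);
    glBindTexture(GL_TEXTURE_2D, wallTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &wallDepth);
    glBindRenderbuffer(GL_RENDERBUFFER, wallDepth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &wallFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, wallFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, wallTexture, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, wallDepth);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

// Per frame: (1) render the cube landscape into the texture,
// (2) render the room, with the wall quad textured by wallTexture.
void renderFrame(int texWidth, int texHeight, int screenWidth, int screenHeight)
{
    glBindFramebuffer(GL_FRAMEBUFFER, wallFbo);
    glViewport(0, 0, texWidth, texHeight);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawCubeLandscape();

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, screenWidth, screenHeight);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawRoom(wallTexture);
}
```

In an engine like Unreal or Source the same two passes are hidden behind the "dynamic texture" or "scripted render target" feature, so you rarely have to touch the framebuffer objects yourself.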

How to draw several shapes (circle, line, rect, etc.) with more control?

I know there is a nice Graphics class with a basic API like drawLine and drawRect. But I need more control over pixel size and line thickness (wide, thick, and thin lines) in my shapes. My intention is to draw a dynamic shape (similar to the attached image) depending on different criteria.
I'm new to J2ME. Any other suggestion for achieving my goal is appreciated. Thanks!
There is no way to set line thickness in J2ME.
However, you can try some workarounds:
To simulate thick lines, you can just draw multiple lines.
And to draw a thick circle you can draw a larger filled circle and then a smaller one inside it.
For dotted lines use setStrokeStyle.
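To illustrate the first two workarounds, here is a small sketch, written in C++ against hypothetical drawLine/fillCircle callbacks purely to show the idea; with MIDP you would call Graphics.drawLine and Graphics.fillArc in exactly the same pattern:

```cpp
#include <cmath>
#include <functional>

// Hypothetical one-pixel primitives standing in for MIDP's
// Graphics.drawLine(...) and Graphics.fillArc(...).
using DrawLineFn   = std::function<void(int x0, int y0, int x1, int y1)>;
using FillCircleFn = std::function<void(int cx, int cy, int r)>;

// Simulate a thick line by drawing several one-pixel lines,
// each offset perpendicular to the line's direction.
void drawThickLine(const DrawLineFn& drawLine,
                   int x0, int y0, int x1, int y1, int thickness)
{
    double dx = x1 - x0, dy = y1 - y0;
    double len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0) len = 1.0;
    double nx = -dy / len, ny = dx / len;     // unit normal to the line

    for (int i = 0; i < thickness; ++i)
    {
        double off = i - (thickness - 1) / 2.0;
        drawLine(static_cast<int>(std::lround(x0 + nx * off)),
                 static_cast<int>(std::lround(y0 + ny * off)),
                 static_cast<int>(std::lround(x1 + nx * off)),
                 static_cast<int>(std::lround(y1 + ny * off)));
    }
}

// Simulate a thick circle outline: fill a large circle in the line
// colour, then fill a smaller one in the background colour.
void drawThickCircle(const FillCircleFn& fillWithLineColour,
                     const FillCircleFn& fillWithBackgroundColour,
                     int cx, int cy, int radius, int thickness)
{
    fillWithLineColour(cx, cy, radius);
    fillWithBackgroundColour(cx, cy, radius - thickness);
}
```

The inner "background colour" fill only works if the circle sits on a solid background; over a busy background you would instead draw multiple concentric one-pixel circles.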
If your target devices are Nokia, then you can use drawPixels(...) and drawPolygon(...) in conjunction with the MIDP graphics methods drawLine(...), drawRect(...) and drawArc(...) to achieve your goal. drawPixels(...) is a very powerful method in the sense that you can draw virtually any custom shape you would like. I know SonyEricsson also supports the Nokia UI APIs, but with "strings attached".
More descriptive information can be found at this link.
If your target devices are not just Nokia, then I would suggest you find or do your own port of the Nokia UI class DirectGraphics. Other manufacturers do not provide ODM-specific libraries the way Nokia does.

Suggestion for graphics library for 2D game (PC)

I'm trying to lay the groundwork for a 2D game with destructible terrain and/or particle effects, scrolling, zooming, characters, etc. I'd like to know if there is a graphics library that supports those things with both software and hardware acceleration (I need pixel access). I've tried SDL (even with the DirectX back-end), but hardware acceleration only seems to work in full screen. I'd appreciate any suggestions.
Use OpenGL, perhaps via another library such as SDL. I don't know why you can't get windowed hardware acceleration working; it might be a platform thing (but that's certainly a different question).
Set the projection matrix to orthographic and use one of the axes (typically z) to organise the "stacking" of elements. With an appropriate transformation in the display subroutine, you can align the x/y coordinates with "traditional" drawing (i.e., top-left down, rather than bottom-left up).
Build your graphical elements into bitmaps, convert them into textures, and draw them onto OpenGL quads.
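A minimal sketch of that setup using the old fixed-function pipeline (shader-based GL would build the same matrix itself): swapping the top and bottom arguments of glOrtho gives the top-left-down coordinates mentioned above, and the z value passed per quad handles stacking.

```cpp
#include <GL/gl.h>

// Orthographic 2D projection with (0,0) at the top-left, y growing downward.
// The -1..1 near/far range leaves room to use z for layering.
void setup2dProjection(int screenWidth, int screenHeight)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, screenWidth, screenHeight, 0.0, -1.0, 1.0);  // top/bottom swapped
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

// Draw one textured sprite quad; "layer" picks its stacking order
// (larger values end up in front when depth testing is enabled).
void drawSprite(unsigned int texture, float x, float y,
                float w, float h, float layer)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(x,     y,     layer);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(x + w, y,     layer);
    glTexCoord2f(1.0f, 1.0f); glVertex3f(x + w, y + h, layer);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(x,     y + h, layer);
    glEnd();
}
```

For pixel-level effects such as destructible terrain, you would keep the terrain as a bitmap in system memory, modify it there, and re-upload the changed region to its texture with glTexSubImage2D.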
