Efficient Random Texture Sampling in OpenGL ES 2.0 [closed] - graphics

Is there any efficient way to fetch texture data in a random way? That is, I'd like to use a texture as a look-up table and I need random access to its elements. Therefore I'd be sampling it in a random fashion. Is it a completely lost cause?

Random access is a basic feature of GLSL. E.g.
vec2 someLocation = ... whatever you like ...;
vec4 sampledColour = texture2D(sampler, someLocation);
Depending on your hardware, it may cost more to read a texture if you compute the sample location in the fragment shader rather than in the vertex shader, where it would be interpolated automatically as a varying. That is the classic dependent texture read penalty: a fixed hardware cost that stems from the hardware being less able to predict which texels you are about to fetch.

You could always pass another texture containing random values to the shader and sample from that. That will give you the same random value for each texture coordinate, but if you don't want that, you can multiply the coordinate by a uniform seed that you update each frame.
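
If it helps, here is a minimal C++ sketch of that suggestion against the OpenGL ES 2.0 API. The 256x256 size, luminance format, and nearest filtering are my own choices for a look-up table, not anything stated in the answers:

#include <GLES2/gl2.h>
#include <cstdlib>
#include <vector>

// Create a size x size single-channel texture filled with random bytes,
// for use as a noise look-up table in a fragment shader.
GLuint createNoiseTexture(int size)
{
    std::vector<unsigned char> noise(size * size);
    for (unsigned char &v : noise)
        v = static_cast<unsigned char>(std::rand() & 0xFF);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // NEAREST filtering: we want raw table entries, not interpolated ones.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // REPEAT wrapping (legal in ES 2.0 only for power-of-two sizes such as
    // 256) lets a coordinate scaled by a per-frame seed uniform wrap around.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, size, size, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, noise.data());
    return tex;
}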

Related

How to explain this decision tree interpretability question?

[Figures 5 and 6: two decision trees; the second is much deeper than the first.]
The two pictures above show the two decision trees.
Question is: It is often claimed that a strength of decision trees is their interpretability.
Is this always justified? Refer to Figures 5 and 6 to help with your answer.
I think the point of the question is that a decision tree is interpretable if its depth is relatively small. The second tree is very deep, i.e. for one single prediction you have to follow a large number of splitting decisions. You therefore lose interpretability, because the explanation for any prediction becomes an intersection of too many conditions for a human reader to process.

Calculating Vertex Normals of a mesh [duplicate]

This question already has answers here: Calculating normals in a triangle mesh
I have honestly done every bit of research possible on this, and it all just says to simply calculate the surface normals of each adjacent face.
Calculating surface normals is easy, but how on earth do you find the adjacent faces for each vertex? What kind of storage do you use? Am I missing something? Why does everyone make it sound so easy?
Any guidance would be greatly appreciated.
but how the heck do you find the adjacent faces for each vertex?
Think of it the other way round: iterate over the faces and add each face's normal to the normal of each of its vertices. Once you have processed all faces, normalize each vertex normal to unit length. I described it in detail here:
Calculating normals in a triangle mesh
If you really want to find the faces for a vertex, the naive approach is to perform a (linear) search for the vertex in the list of faces. A better approach is to maintain an adjacency list.
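
For illustration, here is a minimal C++ sketch of that iterate-over-faces approach. The Vec3 helpers and the indexed triangle layout are my assumptions, not something from the answer:

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// indices holds triangles as consecutive triples of vertex indices.
// No per-vertex adjacency structure is needed: each face adds its
// normal to its three vertices, then every sum is normalized.
std::vector<Vec3> vertexNormals(const std::vector<Vec3> &positions,
                                const std::vector<unsigned> &indices)
{
    std::vector<Vec3> normals(positions.size());
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        // Unnormalized face normal; its length weights the sum by face area.
        Vec3 n = cross(sub(positions[b], positions[a]),
                       sub(positions[c], positions[a]));
        normals[a] = add(normals[a], n);
        normals[b] = add(normals[b], n);
        normals[c] = add(normals[c], n);
    }
    for (Vec3 &n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}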

Largest file size for a JPEG file with a fixed resolution [closed]

Is there a way to calculate the largest possible file size for a JPEG image with a fixed resolution?
For example, is it possible to say that a 1024x768 image has a maximum file size of 3MB?
No, there is not. JPEG uses quantization matrices to try to reduce frequency coefficients to zero; depending on how well that works, and on how much of a pattern occurs across those values, the compression becomes more or less efficient.
See the JPEG Wikipedia article, section "Codec Example", for more details on how the compression works. It should become clear from that why a fixed upper bound is not possible.
Not really. JPEG compression depends on the quality setting and on the content of the image; a single solid-colour "tile" will compress far better than a "busy" image.
E.g. a solid white 800x600 image saved in The Gimp at 85% quality is a 3,155-byte .jpg file. Filling that same 800x600 image with the RGB noise filter produces a 134,935-byte .jpg.
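
A small C++ sketch of the same experiment, assuming the single-header stb_image_write library is available; the 800x600 size and 85% quality are taken from the answer, but exact byte counts will vary by encoder:

#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
#include <cstdlib>
#include <vector>

int main()
{
    const int w = 800, h = 600;
    std::vector<unsigned char> pixels(w * h * 3);

    // Solid white: the DCT blocks are flat, so JPEG compresses superbly.
    for (unsigned char &p : pixels) p = 255;
    stbi_write_jpg("solid.jpg", w, h, 3, pixels.data(), 85);

    // RGB noise: full of high frequencies, nearly incompressible for JPEG.
    for (unsigned char &p : pixels)
        p = static_cast<unsigned char>(std::rand() & 0xFF);
    stbi_write_jpg("noise.jpg", w, h, 3, pixels.data(), 85);

    // Compare the two file sizes on disk: the noise image comes out
    // one to two orders of magnitude larger than the solid one.
    return 0;
}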

Untrained sentiment analysis, need help with capturing sentiment variation statistically [closed]

The question may be vague, but I will try to word it as best I can.
I came up with a crude algorithm to compute whether a sentence (part of a review snippet) is positive, negative, or neutral (let's call this the sentence's EQ). So for 5 sentences I have ratings on a [-100, 100] scale, and the review has to be rated on a [0, 5] scale:
(0, 39.88)
(1, 73.07)
(2, 69.65)
(3, 51.43)
(4, 76.74)
The choice I am struggling with is what method to use to compute the overall rating for the review snippet.
I researched a little and tried two options:
1) 50th percentile (median): for the data points above I got 70, which maps onto the 0-5 scale as roughly 4.2. The results are good, but the sad part is that a percentile doesn't capture how the EQ varied across the snippet from one sentence to the next (since it works on sorted data, the variation is lost). A quick sketch of this computation appears below.
2) Lagrange polynomial: here it came out close to 69. The problem with this approach is that I usually evaluate it at the middle of the X-range (in this case 2), so it too fails to capture the variation in sentence EQ (the end points barely matter; it mostly gives a mid-range value).
Any ideas what method I should choose that can capture the EQ variation in the snippet and give an appropriate value for the overall sentiment?
Perhaps something like the trendline Excel draws could be used?
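
For what it's worth, a quick C++ sketch of option 1 on the data points above; the linear mapping from [-100, 100] to [0, 5] is my reading of the question, not something it specifies:

#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<double> eq = {39.88, 73.07, 69.65, 51.43, 76.74};
    std::sort(eq.begin(), eq.end());
    double median = eq[eq.size() / 2];              // 69.65 for these five
    double rating = (median + 100.0) / 200.0 * 5.0; // map [-100,100] to [0,5]
    std::printf("median = %.2f, rating = %.2f\n", median, rating); // ~4.24
    return 0;
}

As the question itself observes, the sort discards the sentence order, so this number cannot reflect the sentence-to-sentence variation.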
If you are interested in untrained/unsupervised sentiment analysis, read this classic paper by Peter Turney which uses an unsupervised approach achieving an accuracy of around 75% - http://nparc.cisti-icist.nrc-cnrc.gc.ca/npsi/ctrl?action=rtdoc&an=8914166
Sentiment Analysis is fun!

What are sparse voxel octrees?

I have been reading a lot about the potential use of sparse voxel octrees in future graphics engines.
However, I have been unable to find technical information on them.
I understand what a voxel is, but I don't know what sparse voxel octrees are or how they are any more efficient than the polygonal techniques in use now.
Could somebody explain or point me to an explanation for this?
Here's a snippet about id Software on this subject.
id Tech 6 will use a more advanced technique that builds upon the MegaTexture idea and virtualizes both the geometry and the textures to obtain unique geometry down to the equivalent of the texel: the Sparse Voxel Octree (SVO).
It works by raycasting the geometry represented by voxels (instead of triangles) stored in an octree.
The goal is to be able to stream parts of the octree into video memory, going further down the tree for nearby objects to give them more detail, and using higher-level, larger voxels for more distant objects. This gives an automatic level-of-detail (LOD) system for both geometry and textures at the same time.
Also here's a paper on this.
Found more information in this great blog entry.
Well, voxels alone are not that interesting, because for any reasonably detailed model you would need an extremely large number of voxels (if using a uniform grid).
So a hierarchical system is needed, which brings us to octrees. An octree is a very simple spatial data structure that subdivides each node into 8 equally large subnodes.
A sparse octree is an octree where most of the nodes are empty, similar to the sparse matrices you get when discretizing differential equations.
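
As a rough illustration, here is what a node of such a structure might look like in C++; the field layout is mine, and real implementations (such as NVIDIA's, linked further down) pack child masks and offsets much more tightly:

#include <array>
#include <cstdint>
#include <memory>

// One voxel's surface attributes: colour, normal, etc.
// (these properties are described further down).
struct Voxel {
    std::uint8_t r = 0, g = 0, b = 0;
    std::int8_t nx = 0, ny = 0, nz = 0; // quantized normal
};

struct OctreeNode {
    // Bit i is set if child i exists. Sparsity means most bits are zero,
    // so empty regions of space cost no memory at all.
    std::uint8_t childMask = 0;
    std::array<std::unique_ptr<OctreeNode>, 8> children;
    // Leaf payload; interior nodes can hold a filtered average of their
    // children, which yields the automatic LOD described below.
    Voxel data;
};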
An octree node has 8 children. If you imagine a square cut into 4 equal quarters, like so
 _____________
|     |      |
|     |      |
|_____|______|
|     |      |
|     |      |
|_____|______|
then it would be a "quad"(four)-tree.
But in 3 dimensions you have a cube rather than a square, so cutting it horizontally, vertically, and along the Z axis gives you 8 chunks rather than 4, like so
   _____________
  /     /     / |
 /-----/-----/  |
/_____/_____/ | |
|     |     | |/|
|-----|-----|/| |
|     |     | |/
|_____|_____|/
Hope that makes sense.
What makes the SVO unique is that it stores voxel information: a point in space with properties such as colour, normal, etc.
The idea behind the SVO is to do away with triangles and with the need for textures, by combining them into a single SVO that contains the voxelized triangle hull (the model) and its surface textures all in one object.
The reason an octree is needed here is that a uniform grid structure would otherwise require far too much memory for existing graphics cards to handle.
So using an SVO allows for a sort of mip-mapped 3D texture.
Mip-mapping is basically the same image at different scales: the first has the most detail and the last has the least (but they look fairly similar from a distance).
That way, near objects can be streamed from the SVO at greater detail, while farther objects stream at less detail. That is, if you're using ray casting: the further the ray travels from the camera, the less deeply we dig into our mega-texture/SVO.
But if you think outside the box like Euclideon with its "unlimited detail", you would instead use frustum slicing and plane/AABB intersection, with the projected UV of the sliced billboard, to find each texel's colour on the screen, as opposed to shooting out width x height rays per pixel with NVIDIA's naive "beam optimizations".
PS (somewhat off topic): for anyone who doesn't understand how Euclideon does their thing, I believe that is the most practical solution, and I have reason to back up my claim that they DO NOT use ray casting.
The biggest mystery they have isn't rendering, but storing their data. RLE simply doesn't cut it, because some volume/voxel data is more random and less "solid", which makes RLE useless; compression, for me at least, typically needs at least 5 bytes to get anything smaller. They say they output roughly half of what is put in through their compression, so they're using about 2.5 bytes, which is about the same as a triangle nowadays.
An NVIDIA whitepaper named Efficient Sparse Voxel Octrees – Analysis, Extensions, and Implementation describes it in great detail here.
Actually, the 1.15 bits make me suspect they just store things sequentially, in some brilliantly simple way; that is, if they're only storing the volume data and not things like colour or texture data as well.
Think about it like this: one voxel only needs to be 1 bit: is it there or is it not there? (to be or not to be, in other words :P). The octree node it's in is made of 8 voxels plus one bit to store whether the node contains anything at all. That's one bit per voxel plus one per 8: 1 + 1/8 = 1.125. Add another level of parent nodes and you get 1 + 1/8 + 1/8/8 = 1.140625, suspiciously close to the 1.15 they mentioned. Although I'm probably way off, it may give someone a clue.
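Carrying that estimate to infinitely many octree levels gives a geometric series (a standard identity, not a figure from the post):
1 + 1/8 + 1/64 + 1/512 + ... = 1 / (1 - 1/8) = 8/7 ≈ 1.143 bits per voxel,
which is still just under the quoted 1.15.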
You can even simply rasterize all the points; you needn't raytrace or raycast these days, since video cards can project obscene numbers of points. You use an octree because it's a cube shape, continually dividing into smaller and smaller cubes (voxels). I have an engine on the way now using a rasterized technique, and it's looking good.
For those who say you can't animate voxels, I think they really haven't thought much about the topic; of course it's possible. As I see it, making the world is a lot like an "infinite 3D-Coat", so look up 3D-Coat: the level design will be very similar to the way that program works.
The main drawbacks are that the streaming speed is not fast enough, the raytracing or rasterizing doesn't quite make 60 fps, and plotting the actual voxel objects is very computationally expensive. At the moment I can plot a 1024x1024x1024 sphere in about 12 seconds, but all these problems could be remedied; it's an exciting future. My maximum world size at the moment is a meg by a meg by a meg, though I might actually make it 8 times bigger than that.
The other problem, which is actually quite serious, is that it takes about 100 MB to store an 8192x8192x8192 character even after compression, so an environment will be even more than this. Then again, saying you're going to have 8192x8192x8192 characters is completely absurd compared to what we see in games today... an entire world used to be 8192x8192x8192 :)
The way you do it while only storing bits per pointer is that the pointers are constructed at runtime in video memory... get your mind around that and you could have your own engine. :)
