In a shader (using OpenGL ES 2.0) I want to have an array with a dynamic size.
I can declare an array with fixed size:
uniform vec2 vertexPositions[4];
But now I want to make the size dynamic, based on the number of points I will pass in.
I thought about doing a string replacement in the shader source before compiling it, but then I would have to recompile it every time I draw a different element. That seems CPU-intensive.
The typical approach would be to size the uniform array to the maximum number of elements you expect to use, and then only update the subset of it that you're actually using. You can then pass in the effective size of the array as a separate uniform.
uniform vec2 arr[MAX_SIZE];
uniform int arr_size;
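A minimal host-side sketch of that approach in C++ (the helper name uploadPositions is made up; "arr" and "arr_size" match the declarations above):

void uploadPositions(GLuint program, const GLfloat* points, GLint pointCount) {
    GLint arrLoc  = glGetUniformLocation(program, "arr");
    GLint sizeLoc = glGetUniformLocation(program, "arr_size");
    glUseProgram(program);
    glUniform2fv(arrLoc, pointCount, points); // upload only the first pointCount vec2s
    glUniform1i(sizeLoc, pointCount);         // tell the shader how many are valid
}

In the shader you then iterate over the fixed-size array but stop once the index reaches arr_size; since GLSL ES 1.00 requires constant loop bounds, the usual pattern is to loop to MAX_SIZE and break early.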
I have to design GLSL code to handle some unusual calculations. The scene contains 2048 vertices with predefined positions, and each vertex carries 50 vec3 attributes to be used in the vertex shader. Since the maximum number of attribute locations per vertex is typically limited to 16, how can I supply that much data per vertex?
Maybe a texture lookup in the vertex shader is one way to solve this problem, but I am not sure whether it is the optimal way.
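For what it's worth, a sketch of the texture-lookup idea (purely illustrative C++: it assumes desktop GL 3.x / GLSL 1.30+, where float textures, texelFetch and gl_VertexID are available, and packs one vec3 attribute per texel with one row per attribute slot):

GLuint attribTex;
glGenTextures(1, &attribTex);
glBindTexture(GL_TEXTURE_2D, attribTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// 2048 vertices wide, 50 attribute rows tall, one vec3 per texel
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, 2048, 50, 0, GL_RGB, GL_FLOAT, attribData);
// In the vertex shader the lookup would then be something like:
//   vec3 a = texelFetch(attribTex, ivec2(gl_VertexID, row), 0).xyz;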
I need to read PLY files (Stanford Triangle Format) with embedded texture for some purpose. I have seen several specifications of the PLY format, but could not find a single source specifying the syntax for texture mapping. There seem to be many libraries that read PLY files, but most of them do not appear to support textures (they just crash; I tried 2-3 of them).
The following is the header of a PLY file with a texture:
ply
format binary_little_endian 1.0
comment TextureFile Parameterization.png
element vertex 50383
property float x
property float y
property float z
property float nx
property float ny
property float nz
element face 99994
property list uint8 int32 vertex_index
property list uint8 float texcoord
end_header
What I don't understand is the line property list uint8 float texcoord. Also, the list corresponding to a face is:
3 1247 1257 1279 6 0.09163 0.565323 0.109197 0.565733 0.10888 0.602539 6 9 0.992157 0.992157 0.992157 0.992157 0.992157 0.992157 0.992157 0.992157 0.992157
What is this list, and what is its format? I understand that PLY gives you the opportunity to define your own properties for the elements, but handling textures seems to be pretty much standard, and quite a few applications (like the popular Meshlab) seem to open textured PLY files using the above syntax.
I want to know the standard syntax for reading textured PLY files and, if possible, the source where this information is documented.
In PLY files, faces often contain lists of values, and these lists can vary in size. For a triangular face, expect three values; for a quad, four; and so on up to any arbitrary n-gon. A list is declared in a line like this:
property list uint8 int32 vertex_index
This is a list called 'vertex_index'. It will always consist of an 8-bit unsigned integer (that's the uint8) giving the size N, followed by N 32-bit integers (that's the int32).
In the example line this shows up right away:
3 1247 1257 1279
This says "here come 3 values", and then it gives you the three.
Now the second list is where the texture coordinates should be:
property list uint8 float texcoord
It's just like the first list in that the size comes first (as an unsigned byte), but this time it is followed by a series of 32-bit floats instead of integers (which makes sense for texture coordinates). The straightforward interpretation is that there is a texture coordinate for each of the vertices listed in vertex_index. If we assume these are just 2D texture coordinates (a pretty safe assumption), we should expect to see the number 6 followed by 6 floating-point values ... and we do:
6 0.09163 0.565323 0.109197 0.565733 0.10888 0.602539
These are the texture coordinates that correspond to the three vertices already listed.
Now, for a face, that should be it. I don't know what the rest of the line is. According to your header the rest of the file should be binary, so I don't know how you got it as a line of ASCII text, but the extra data on that line shouldn't be there (also according to the header, which fully defines a face).
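To make the layout concrete, here is a minimal C++ sketch of reading one face record from the binary body (readFace is a made-up helper; it assumes a little-endian host matching the file's binary_little_endian format):

#include <cstdint>
#include <cstdio>
#include <vector>

bool readFace(FILE* f, std::vector<int32_t>& indices, std::vector<float>& uvs) {
    uint8_t n;
    if (fread(&n, 1, 1, f) != 1) return false;               // vertex_index count
    indices.resize(n);
    if (fread(indices.data(), sizeof(int32_t), n, f) != n) return false;
    uint8_t m;
    if (fread(&m, 1, 1, f) != 1) return false;               // texcoord count (2*n for 2D UVs)
    uvs.resize(m);
    if (fread(uvs.data(), sizeof(float), m, f) != m) return false;
    return true;
}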
Let me add to OllieBrown's response, as further info for anyone coming across this, that the format above uses per-face texture coordinates, also called wedge UVs. What this means is that if vertices are shared, a shared vertex (i.e. a vertex index used by multiple adjacent triangles) might have different UVs depending on the triangle it takes part in. That usually happens when a vertex lies on a UV seam or where UVs meet the texture borders. Rendering then typically means duplicating vertices, since GPUs require per-vertex attributes: a shared vertex ends up as X vertices overlapping in space (where X is the number of triangles sharing it), each with different UVs based on the triangle they take part in. One advantage of keeping the data like this on disk is that, in the text version of the format, it reduces the amount of text you need and therefore the file size. OBJ has this as well, although it keeps a flat UV array and indexes into that array instead, regardless of whether the UVs are per-vertex or per-face.
I also can't figure out what the 6 9 <9*0.992157> part is (although the 9 values look like three vector3s with the same value on all three axes), but Paul Bourke's code here has this description of the setup_other_props function:
/******************************************************************************
Make ready for "other" properties of an element-- those properties that
the user has not explicitly asked for, but that are to be stashed away
in a special structure to be carried along with the element's other
information.
Entry:
plyfile - file identifier
elem - element for which we want to save away other properties
******************************************************************************/
void setup_other_props(PlyFile *plyfile, PlyElement *elem)
From what I understand, it's possible to keep extra data per element that the application has not explicitly asked for. These data are supposed to be kept and carried along, but not interpreted for use by every application. Bourke's description of the format mentions backwards compatibility with older software, so this might be a case of a custom extension that only some applications understand; the extra info shouldn't hinder an older application that doesn't need it from parsing and/or rendering the content.
I want to set a different blurFilter.texelSpacingMultiplier for different regions of the image in the GPUImageCannyEdgeDetection filter. Is there a way to do that?
The texelSpacingMultiplier is defined as a uniform in the fragment shaders used for this operation. That will remain constant across the image.
If you wish to have this vary across parts of the image, you will need to create a custom version of this operation and its sub-filters that takes in a varying per-pixel value.
Probably the easiest way to do this would be to encode your per-pixel multiplier values into a texture that is provided as a secondary input image. This texture could be read from within the fragment shaders, and the decoded RGBA value converted into a floating-point value that sets the multiplier per pixel. That would allow you to create a starting image (drawn or otherwise) to be used as a mask defining how this is applied.
It will take a little effort to do this, since you will need to rewrite several of the sub-filters used to construct the Canny edge detection implementation here, but the process itself is straightforward.
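As a rough illustration of the mask idea (this is not GPUImage's actual code; maskTexture and the 0..4 remapping are assumptions), the per-pixel decode in a custom fragment shader could look like this, stored as a C++ string the way these shaders usually are:

static const char* kVaryingMultiplierExcerpt = R"(
    uniform sampler2D maskTexture;        // secondary input image holding the mask
    varying highp vec2 textureCoordinate;
    void main() {
        // red channel 0..1 remapped to a 0..4 texel-spacing multiplier
        highp float multiplier = texture2D(maskTexture, textureCoordinate).r * 4.0;
        // ...use multiplier wherever the constant uniform was used...
        gl_FragColor = vec4(vec3(multiplier * 0.25), 1.0); // placeholder output
    }
)";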
I am trying to use HLSL code as a basis for an experiment, but I don't understand what uv.zw represents. The code shows:
float4 uv0 : TEXCOORD0
...
uv0.zw;
Isn't uv only two components? I know uvw supports three, but what's the fourth component? Alpha?
In the online examples, I could only find TEXCOORD0 used with float2 values, not float4.
Textures can be 3D, so texture coordinates can have a third dimension, z.
If you're familiar with homogeneous coordinates, you'll know that one way of representing a variety of transformations on a 3D coordinate is 4D via homogeneous coordinates, which adds a "w" coordinate.
All values on the GPU are actually float4s behind the scenes; declaring things as float or float2 etc. merely restricts the number of channels that are used.
If a float2 value accesses the .zw channels, the result is technically undefined, though the compiler may accept it, so be cautious.
In HLSL the name "uv" has no intrinsic meaning; you could declare a variable of any type with that name.
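One common concrete case, as a purely illustrative HLSL excerpt (held in a C++ string for compilation with something like D3DCompile; the names are made up): a float4 TEXCOORD0 often packs two 2D UV sets into a single interpolator, with .xy for the first set and .zw for the second.

static const char* kPackedUvShader = R"(
    float4 main(float4 uv0 : TEXCOORD0) : COLOR
    {
        float2 uvDiffuse  = uv0.xy;  // first texture's coordinates
        float2 uvLightmap = uv0.zw;  // second texture's coordinates
        return float4(uvDiffuse, uvLightmap); // placeholder: visualize both sets
    }
)";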
I need to calculate the minimum and maximum UV values assigned to the pixels produced when a given object is drawn onscreen from a certain perspective. For example, if I have a UV-mapped cube but only the front face is visible, min(UV) and max(UV) should be set to the minimum and maximum UV coordinates assigned to the pixels of the visible face.
I'd like to do this using Direct3D 9 shaders (and the lowest shader model possible) to speed up processing. The vertex shader could be as simple as taking each input vertex's UV coordinates and passing them on, unmodified, to the pixel shader. The pixel shader, on the other hand, would have to take the values produced by the vertex shader and use these to compute the minimum and maximum UV values.
What I'd like to know is:
How do I maintain state (current min and max values) between invocations of the pixel shader?
How do I get the final min and max values produced by the shader into my program?
Is there any way to do this in HLSL, or am I stuck doing it by hand? If HLSL won't work, what would be the most efficient way to do this without the use of shaders?
1) You don't.
2) You would have to do a read back at some point. This will be a fairly slow process and cause a pipeline stall.
In general I can't think of a good way to do this. What exactly are you trying to accomplish with this? There may be some other way to achieve the result you are after.
You "may" be able to get something going using multiple render targets and writing the UVs for each pixel to a render target. Then you'd need to read the render target back to main memory and parse it for your min and max values. This is a really slow and very ugly solution.
If you can do it as a couple of separate passes, you may be able to render to a very small render target and use one pass with a Max and one pass with a Min alpha blend op. Again ... not a great solution.
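For completeness, a C++ sketch of that last idea (illustrative only; device is an IDirect3DDevice9*, and blend-op support should be checked via the D3DPMISCCAPS_BLENDOP cap):

device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);  // factors are ignored
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);  // by the MIN/MAX blend ops

// Pass 1: clear the small target to white, draw with a pixel shader that
// writes the interpolated UV as its color, and keep the per-channel minimum.
device->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_MIN);

// Pass 2: clear to black, draw again, and keep the per-channel maximum.
device->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_MAX);

// Afterwards, read the small target back and reduce its few texels on the CPU.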