Does anyone know how I can get all the frames of a layer, with their pixels, using ExtendScript (JavaScript)? I've got the selected layer and I want to loop through its frames and compare their pixels. But I can't find a property called frames or anything similar on the layer object.
There is no direct access to frames in the API. Frames aren't really a thing; basically you have Comps, Layers, and Properties.
If you want to examine the pixels of a layer, there is no way to do that within the API either, apart from adding an expression that uses the sampleImage() expression method and reading the value it returns. Doing this for every pixel would be glacially slow.
Think of scripting as automating the UI. There's pretty much nothing you can do with scripts that you couldn't achieve as a user using the program, just much slower. To access and change the pixels in a layer you really need to be using the SDK and C++, not ExtendScript.
I'm trying to filter/clip a raster provided by a Web Map Service from GeoServer.
I don't want to clip based on a polygon; I want to filter based on a value, such that all raster pixels below this value are black or transparent.
As far as I know I cannot use cql_filter, since it is only for WFS feature requests and not for WMS raster images.
Maybe you have an idea how to solve this.
A WMS does not return you data, it returns a picture of your data.
So there are two possibilities:
You use a Web Coverage Service (WCS) request, which will return the actual data in your raster. Think of it as a WFS for rasters. The GeoServer manual covers the mechanics of making a request, and there is a request builder under Demos for you to experiment with (see the example request after this list). I'm pretty sure this will only allow you to subset by rectangular areas, though your client is free to do more complex operations on the returned data.
You can consider this a styling exercise, in which case it should be possible to set up an SLD style that performs the operation you need (a minimal sketch follows below). You will need to use a spatial filter to clip the underlying raster and apply two rules depending on whether it is true or false. You can even pass the polygon in as an environment variable from the request if you need to.
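For the WCS route, a GetCoverage request looks roughly like this (the workspace and coverage names are placeholders; the request builder under Demos will generate the exact URL for your layer):

```
http://localhost:8080/geoserver/wcs?service=WCS&version=2.0.1&request=GetCoverage&coverageId=myworkspace__myraster&format=image/geotiff
```

For the styling route, here is a minimal, untested sketch of the value-threshold idea using a RasterSymbolizer ColorMap inside an SLD rule; the threshold of 100 and the data maximum of 255 are assumptions you would replace with your own values:

```xml
<RasterSymbolizer>
  <ColorMap type="intervals">
    <!-- values below the assumed threshold of 100: fully transparent -->
    <ColorMapEntry color="#000000" quantity="100" opacity="0"/>
    <!-- values from 100 up to the assumed data maximum: opaque -->
    <ColorMapEntry color="#FFFFFF" quantity="255" opacity="1"/>
  </ColorMap>
</RasterSymbolizer>
```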
I have an application which renders many filled polygons with OpenGL, in 2D. Filling is done by tessellation, but performance is not optimal: 1900 polygons made up of 122,000 vertices (that is, about 64 vertices per polygon) are displayed in about 3 seconds.
Apparently the CPU is not the bottleneck: if I replace calls to gluTessVertex with calls to glColor (just to test where the bottleneck is), performance is doubled.
I have the same problem with loading many small textures.
Now, what are the options to improve performance? It seems most of the time is spent in the geometry subsystem. Rendering is fast enough.
I already have a worker thread which does the loading (tessellation, texture binding) in one context, and another thread which does the drawing in another context. The two contexts share objects via wglShareLists and it works like a charm.
Can I have a third thread in a third context which would also handle tessellation for half of the polygons? Has anyone tried that? Is it safe? Any example of sharing objects between three contexts?
Forgot to say, I have an ATI Radeon HD 4550 graphics card; I suppose it can handle more than 39 kB/s of data.
Increase Performance
Sounds like you're using the old fixed-function pipeline.
If you're unsure of what that is, well, the following functions are a part of the fixed-function pipeline.
glBegin()
glEnd()
glVertex*()
glTexCoord*()
glNormal*()
glColor*()
etc.
Those functions are old and render geometry immediately. That means that each time you call the above functions, that geometry gets sent to the GPU. By doing that a lot of times, you can easily make the FPS drop way under 60 just by rendering simple things.
Instead you need to use buffers, and to be more precise, VAOs with/or VBOs (and IBOs).
A VBO, or Vertex Buffer Object, is a buffer which stores vertices which you can then render. This is much, much faster and better to use than glBegin() and glEnd(). When you create a VBO you supply it with vertices, and they only need to be sent to the GPU once; that's basically why they are fast: they are already on the GPU and only require a single draw call instead of many.
The reason I said "with/or" is that in the newer versions you need to create a VAO which then uses a VBO, whereas before you could simply render the VBOs directly.
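Here is a minimal sketch of the idea, assuming an OpenGL 3.x+ context and a function loader (e.g. GLAD) are already set up; the vertex data and attribute layout are just placeholders:

```cpp
// One triangle, uploaded once instead of re-sent every frame via glBegin()/glEnd().
GLfloat vertices[] = {
    -0.5f, -0.5f,
     0.5f, -0.5f,
     0.0f,  0.5f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// GL_STATIC_DRAW hints that the data is set once and drawn many times.
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// Attribute 0: two floats per vertex (matches layout(location = 0) in the shader).
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (void*)0);
glBindVertexArray(0);

// Per frame: a single draw call replaces the whole immediate-mode block.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
```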
Tessellation
There are multiple ways to do tessellation, and things which look like or give the effect of tessellation.
For instance, you could simply render different models according to the required LOD (Level of Detail): when you're up close to an object you render the model with all its details, which probably has a high vertex count, and the further away you are from the model, the lower-detail (fewer-vertex) version you render instead. You can't really do that with something like terrain, though, and you definitely shouldn't do it with dynamic and/or procedurally generated terrain.
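A trivial sketch of that distance-based selection; the distance cutoffs and the Model type are made up for illustration:

```cpp
#include <vector>

struct Model { /* mesh data for one detail level */ };

// lods[0] is the most detailed version, lods.back() the coarsest.
// Assumes lods is non-empty.
const Model& pickLOD(const std::vector<Model>& lods, float distance) {
    if (distance < 10.0f || lods.size() == 1) return lods[0];
    if (distance < 50.0f || lods.size() == 2) return lods[1];
    return lods[2];
}
```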
You can also do actual geometry tessellation, and you would do that through a shader. Since tessellation is a really huge topic, I will provide you with two URLs which both explain it and have code on them.
Both of these articles use modern OpenGL 4.0+.
http://prideout.net/blog/?p=48
http://antongerdelan.net/opengl/tessellation.html
Texturing
Generating and binding textures are still the same.
Instead of using gluBuild2DMipmaps() you can use glGenerateMipmap(GL_TEXTURE_2D); it was added around OpenGL 3.0, if I remember correctly.
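A short sketch of what that looks like when creating a texture; width, height and pixels are assumed to come from your image loader:

```cpp
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);  // builds the whole mip chain on the GPU
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```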
Again, you can (and should) swap all your glBegin() - glEnd() calls (and everything in between) out for VAOs and VBOs. You can store everything you want inside a buffer: vertices, texture coordinates, normals, colors, etc. You can store the things in separate buffers, or you can store them inside a single buffer, usually called an interleaved buffer or interleaved VBO.
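A sketch of an interleaved layout, with position, texture coordinate and color packed per vertex into one buffer; the attribute locations 0-2 are assumptions that must match your shader:

```cpp
#include <cstddef>  // offsetof

struct Vertex {
    GLfloat x, y;     // position
    GLfloat u, v;     // texture coordinate
    GLfloat r, g, b;  // color
};

// After uploading an array of Vertex with glBufferData, describe the layout:
GLsizei stride = sizeof(Vertex);
glEnableVertexAttribArray(0);  // position
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, x));
glEnableVertexAttribArray(1);  // texture coordinate
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, u));
glEnableVertexAttribArray(2);  // color
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, r));
```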
You won't need glEnable(GL_TEXTURE_2D) and glDisable(GL_TEXTURE_2D) anymore, because you do that within a shader: you bind textures and sample them in the shader, and since you create the shader program you can make it act however you want.
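Binding a texture for a shader then looks roughly like this; 'program' and 'tex' are assumed to have been created already, and the shader is assumed to declare 'uniform sampler2D uTexture':

```cpp
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);      // select texture unit 0
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(glGetUniformLocation(program, "uTexture"), 0);  // point the sampler at unit 0
// The fragment shader samples it with: texture(uTexture, uv);
```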
I'm currently designing a game using Cocos2d. There's no code yet, as I'm still developing my ideas. But I've run across a question I can't answer and want to know if I'm just missing something. Here's what I'm currently thinking:
I am "dropping" multiple blocks from the top of the screen and they move down the screen in random directions. They will eventually settle at the bottom of the screen and stack up one on top of the other. Eventually, while falling, some blocks are going to collide with others. When two blocks collide I want to test to see if certain characteristics of each block are equal (e.g. size, color, orientation, etc.). Each block is it's own object, will handle it's own movement and collision detection, and will have accessor methods for size, color, orientation, etc.
Here's my question:
Am I correct in thinking that each block is a separate unit in itself and doesn't know anything about the other blocks? Block A, for instance, collides with Block B and only knows that it collided with something, but doesn't know it was another block? If this is so, then how do I do a proper comparison? How do I tell which block has collided with which block and get access to each block's data and where do I do the comparison? In the layer?
I'd love to be pointed in a decent direction here. I'm not really sure if what I want to do is even doable. Any suggestions?
You could use a physics engine that usually comes along with cocos2d: either Chipmunk or Box2D. The physics engine will take care of collisions for you, and if you implement collision callbacks then you can know when two objects hit each other. You can then check the characteristics of each object and react accordingly. This tutorial on Chipmunk and cocos2d integration might be helpful.
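As a rough sketch of what a collision callback gives you, here is a minimal Box2D contact listener (Box2D is the C++ engine bundled with cocos2d; the Block class and its accessors are hypothetical):

```cpp
#include <Box2D/Box2D.h>

// Hypothetical game object with the attributes to compare.
class Block {
public:
    int color() const;
    int size() const;
};

class BlockContactListener : public b2ContactListener {
public:
    void BeginContact(b2Contact* contact) override {
        // Recover the game objects attached to the two colliding bodies.
        void* dataA = contact->GetFixtureA()->GetBody()->GetUserData();
        void* dataB = contact->GetFixtureB()->GetBody()->GetUserData();
        if (!dataA || !dataB) return;  // e.g. one of them is the floor

        Block* blockA = static_cast<Block*>(dataA);
        Block* blockB = static_cast<Block*>(dataB);

        // Both blocks are known here, so you can compare them directly.
        if (blockA->color() == blockB->color() && blockA->size() == blockB->size()) {
            // matching blocks collided; react accordingly
        }
    }
};

// During setup:
//   world->SetContactListener(&listener);
//   body->SetUserData(block);  // attach the Block* when creating each body
```

So each block doesn't need to know about the others at all; the engine hands you both parties of every collision, and you do the comparison in the callback (or forward it to your layer).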
Just have a question for anyone out there who knows some sort of game engine pretty well. What I am trying to implement is some sort of script or code that will allow me to make a custom game character and textures mid-game. A few examples would be along the lines of changing facial expressions and body part positions in the game SecondLife. I don't really need a particular language, feel free to use your favorite, I'm just really looking for an example on how to go about this.
Also, I was wondering if there is any way to combine textures for optimization; for example, if I wanted to add a tattoo to a character mid-game, is there any code that could combine his body texture and the tattoo texture into one texture to use? (That way I can render just one texture per body.)
Any tips would be appreciated; sorry if the question is a wee bit too vague.
I think that "swappable tattoos" are typically done as a second render pass of the polygons. You could do some research into "detail maps" and see if they provide what you're looking for.
As for actually modifying the texture data at runtime, all you need to do is composite the textures into a new one. More than likely you could even use the rendering API to do it for you: render the textures you want to combine, in the order you want to combine them, into a new texture. Mind you, doing this every frame would be counterproductive, since it's slower to render two textures into one and then draw the new one than it would be just to draw the two sources one after the other.
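For example, in OpenGL the composite step could be a one-off render-to-texture pass via a framebuffer object. This is only a sketch: bodyTex and tattooTex are assumed to exist, and drawFullscreenQuad() is a hypothetical helper that draws a textured quad covering the render target:

```cpp
GLuint fbo, combinedTex;

// Empty texture that will receive the composite.
glGenTextures(1, &combinedTex);
glBindTexture(GL_TEXTURE_2D, combinedTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Point a framebuffer at it and draw the two source textures in order.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, combinedTex, 0);
glViewport(0, 0, width, height);

drawFullscreenQuad(bodyTex);           // base skin first
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad(tattooTex);         // alpha-blend the tattoo on top
glDisable(GL_BLEND);

glBindFramebuffer(GL_FRAMEBUFFER, 0);  // combinedTex now holds the result
```

Done once when the tattoo is applied (not per frame), the character can then be drawn with the single combinedTex.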
I was wondering how the Nike website makes the change you can see when selecting a color or a sole. At first I thought they were only using images, and when the user picked a color you just replaced that part; but when I selected a different sole I noticed it didn't change like an image, it looked a bit more as if it was being rendered. Does anybody happen to know how this is done? Or where can I get further info about creating this effect? :)
It's hard to know for sure, but my guess would be that they're using a rendering service similar to that provided by Adobe's Scene7.
It's a product that is used to colorize/customize a base product image based on user choices.
If you're interested in using the service, I'd suggest signing up for their weekly webinar. I attended one a while back and was very impressed with their offering. They showed the Converse site (which had functionality almost identical to the Nike site) as a demo.
A lot of these tools are built out in Flash using a variety of techniques:
1) You can use Flash's BitmapData object to directly shift the hues of the pixels in your item. This is probably the simplest technique but often limits you to simple color transformations.
2) You can pre-render transparent PNGs (or photos, I guess) containing the various textures you would want to show on your object (for instance patterns or textures) and have them dynamically added to your stage at runtime. This, I think, offers the highest fidelity but means you need all of your items rendered up front.
3) You can create 3D COLLADA files and load them via a library like Papervision3D, then dynamically change the texture at runtime. This is the most memory-intensive technique and tends to result in far worse fidelity, but in exchange you get a full 3D object that you can view in space.
I'm sure there are other techniques but those are the top 3 I can think of. I hope that helps!