What do the parameters of CIVignette mean? - core-image

I checked CIVignette in the Core Image Filter Reference at
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html#//apple_ref/doc/filter/ci/CIColorControls
and played around with the parameters:
inputRadius
inputIntensity
but I still haven't understood exactly what each parameter affects. Could someone please explain?

Take a look at Wikipedia to understand what vignetting in photography means.
It is the fall-off of light starting from the center of an image towards the corners.
Apple does not explain much about the parameters.
Obviously the radius specifies, somehow, where the vignetting starts.
The intensity parameter I expect to control how fast the light falls off once the vignetting starts.
The radius may not be given in points; a value of 1.0 relates to your picture size.

Intensity is definitely something like 1 to 10 or a larger number: 1 has some effect, 10 is rather dark already.
The radius seems to be in pixels (or points). I use a fraction of the image size (say 1/10th of the width) and the effect is pretty good. However, if the intensity is strong (say 10), the radius can be small (like 1) and you can still see the difference.

Turns out there is an attributes property on CIFilter that explains its properties and ranges.
let filter = CIFilter(name: "CIVignette")!
print("\(filter.attributes)")
Generates the following output:
[
"CIAttributeFilterDisplayName": Vignette,
"CIAttributeFilterCategories": <__NSArrayI 0x6000037020c0>(
CICategoryColorEffect,
CICategoryVideo,
CICategoryInterlaced,
CICategoryStillImage,
CICategoryBuiltIn
),
"inputRadius": {
CIAttributeClass = NSNumber;
CIAttributeDefault = 1;
CIAttributeDescription = "The distance from the center of the effect.";
CIAttributeDisplayName = Radius;
CIAttributeMax = 2;
CIAttributeMin = 0;
CIAttributeSliderMax = 2;
CIAttributeSliderMin = 0;
CIAttributeType = CIAttributeTypeScalar;
},
"CIAttributeFilterName": CIVignette,
"inputImage": {
CIAttributeClass = CIImage;
CIAttributeDescription = "The image to use as an input image. For filters that also use a background image, this is the foreground image.";
CIAttributeDisplayName = Image;
CIAttributeType = CIAttributeTypeImage;
},
"inputIntensity": {
CIAttributeClass = NSNumber;
CIAttributeDefault = 0;
CIAttributeDescription = "The intensity of the effect.";
CIAttributeDisplayName = Intensity;
CIAttributeIdentity = 0;
CIAttributeMax = 1;
CIAttributeMin = "-1";
CIAttributeSliderMax = 1;
CIAttributeSliderMin = "-1";
CIAttributeType = CIAttributeTypeScalar;
},
"CIAttributeFilterAvailable_Mac": 10.9,
"CIAttributeFilterAvailable_iOS": 5,
"CIAttributeReferenceDocumentation": http://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIVignette
]
inputRadius is a float between 0 and 2 that affects the 'size' of the shadow.
inputIntensity is a float between -1 and 1 that affects the 'darkness' of the filter.


Changing opacity of individual items in OpenGL ES 2.0 Quad Batch

Overview
In my app (which is a game), I make use of batching to reduce the number of draw calls. So I'll create, for example, a Java object called platforms for all the platforms in the game. All the enemies are batched together, as are all collectible items, etc.
This works really well. At present I am able to size and position the individual items in a batch independently of each other. However, I've come to the point where I really need to change the opacity of individual items as well. Currently, I can only change the opacity of the entire batch.
Batching
I upload the vertices for all items within the batch that are to be displayed (I can turn individual items off if I don't want them to be drawn), and once they are all uploaded, I simply draw them in one call.
The following is an idea of what I'm doing - I realise this may not compile; it is just to give an idea for the purpose of the question.
public void draw(){
    // Upload vertices: five floats per vertex (x, y, z, u, v), six vertices per quad
    int x = 0; // write position in the vertices array
    for (int count = 0; count < numOfSpritesInBatch; count += 1) {
        // top-left
        vertices[x] = xLeft;
        vertices[x + 1] = yTop;
        vertices[x + 2] = 0;
        vertices[x + 3] = textureLeft;
        vertices[x + 4] = 0;
        // top-right
        vertices[x + 5] = xRight;
        vertices[x + 6] = yTop;
        vertices[x + 7] = 0;
        vertices[x + 8] = textureRight;
        vertices[x + 9] = 0;
        // bottom-left
        vertices[x + 10] = xLeft;
        vertices[x + 11] = yBottom;
        vertices[x + 12] = 0;
        vertices[x + 13] = textureLeft;
        vertices[x + 14] = 1;
        // top-right (second triangle)
        vertices[x + 15] = xRight;
        vertices[x + 16] = yTop;
        vertices[x + 17] = 0;
        vertices[x + 18] = textureRight;
        vertices[x + 19] = 0;
        // bottom-left (second triangle)
        vertices[x + 20] = xLeft;
        vertices[x + 21] = yBottom;
        vertices[x + 22] = 0;
        vertices[x + 23] = textureLeft;
        vertices[x + 24] = 1;
        // bottom-right
        vertices[x + 25] = xRight;
        vertices[x + 26] = yBottom;
        vertices[x + 27] = 0;
        vertices[x + 28] = textureRight;
        vertices[x + 29] = 1;
        x += 30;
    }
    vertexBuf.rewind();
    vertexBuf.put(vertices).position(0);

    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
    GLES20.glUseProgram(iProgId);

    Matrix.multiplyMM(mvpMatrix2, 0, mvpMatrix, 0, mRotationMatrix, 0);
    mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix2, 0);

    vertexBuf.position(0);
    GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
    GLES20.glEnableVertexAttribArray(iPosition);

    vertexBuf.position(3);
    GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
    GLES20.glEnableVertexAttribArray(iTexCoords);

    // Enable alpha blending and set the blending function
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);

    // Draw it
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6 * numOfSpritesInBatch);

    // Disable alpha blending
    GLES20.glDisable(GLES20.GL_BLEND);
}
Shaders
String strVShader =
    "uniform mat4 uMVPMatrix;" +
    "attribute vec4 a_position;\n" +
    "attribute vec2 a_texCoords;" +
    "varying vec2 v_texCoords;" +
    "void main()\n" +
    "{\n" +
    "gl_Position = uMVPMatrix * a_position;\n" +
    "v_texCoords = a_texCoords;" +
    "}";

String strFShader =
    "precision mediump float;" +
    "uniform float opValue;" +
    "varying vec2 v_texCoords;" +
    "uniform sampler2D u_baseMap;" +
    "void main()" +
    "{" +
    "gl_FragColor = texture2D(u_baseMap, v_texCoords);" +
    "gl_FragColor *= opValue;" +
    "}";
Currently, I have a method in my Sprite class that allows me to change the opacity. For example, something like this:
spriteBatch.setOpacity(0.5f); //Half opacity
This works, but changes the whole batch - not what I'm after.
Application
I need this because I want to draw small indicators when the player destroys an enemy, showing the score obtained from that action (the type of thing that happens in many games). I want these little 'score indicators' to fade out once they appear. All the indicators would of course be created as a batch so they can all be drawn with one draw call.
The only other alternatives are:
Create 10 textures at varying levels of opacity and just switch between them to create the fading effect. Not really an option, as it is way too wasteful.
Create each of these objects separately and draw each with its own draw call. This would work, but with a maximum of 10 of these objects on-screen I could potentially be using 10 draw calls just for these items, while the game as a whole currently only uses about 20 draw calls to draw hundreds of items.
I also need to look at future uses of this in particle systems etc., so I would really like to figure out how to do this (be able to adjust each item's opacity separately). If I need to do this in the shader, I would be grateful if you could show how this works. Alternatively, is it possible to do this outside of the shader?
Surely this can be done in some way or another? All suggestions welcome....
The most direct way of achieving this is to use a vertex attribute for the opacity value, instead of a uniform. This will allow you to set the opacity per vertex, without increasing the number of draw calls.
To implement this, you can follow the pattern you already use for the texture coordinates. They are passed into the vertex shader as an attribute, and then handed off to the fragment shader as a varying variable.
So in the vertex shader, you add:
...
attribute float a_opValue;
varying float v_opValue;
...
v_opValue = a_opValue;
...
In the fragment shader, you remove the uniform declaration for opValue, and replace it with:
varying float v_opValue;
...
gl_FragColor *= v_opValue;
...
In the Java code, you extend the vertex data with an additional value for the opacity, to use 6 values per vertex (3 position, 2 texture coordinates, 1 opacity), and update the state setup accordingly:
vertexBuf.position(0);
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 6 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iPosition);
vertexBuf.position(3);
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 6 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iTexCoords);
vertexBuf.position(5);
GLES20.glVertexAttribPointer(iOpValue, 1, GLES20.GL_FLOAT, false, 6 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iOpValue);
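To complete the picture, here is a minimal sketch of how the upload side might change; the spriteOpacity array and the iOpValue lookup are illustrative assumptions, not part of the original code:
// Look up the new attribute once, after linking the program (name from the shader above).
int iOpValue = GLES20.glGetAttribLocation(iProgId, "a_opValue");

// Inside the upload loop: each vertex now carries 6 floats (x, y, z, u, v, opacity),
// so a quad takes 36 floats instead of 30. spriteOpacity is an assumed per-sprite value.
float opacity = spriteOpacity[count];
// top-left vertex of the quad
vertices[x]     = xLeft;
vertices[x + 1] = yTop;
vertices[x + 2] = 0;
vertices[x + 3] = textureLeft;
vertices[x + 4] = 0;
vertices[x + 5] = opacity;
// ...and likewise for the remaining five vertices of the quad, then advance: x += 36;
Fading a score indicator is then just a matter of writing a smaller opacity into that sprite's six vertices each frame before re-uploading the buffer; the whole batch is still drawn with a single glDrawArrays call.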

Converting images into a linear color palette with JS, losing colors

The project in question, https://github.com/matutter/Pixel2, is a personal project to replace some out-of-date software at work. What it should do is: the user adds an image and it generates a color palette of the image. The color palette should have no duplicate colors. (That's the only important part.)
My question is: why do larger, hi-res, or complex images not work as well (loss of color data)?
Using dropzone.js, I have the user put a picture on the page. The picture is a thumbnail. Next I use jQuery to find the src of an <img src="...">. I pass that src to a function that does this:
function generate(imgdata) {
    var imageObj = new Image();
    imageObj.src = imgdata;
    convert(imageObj); // the function that traverses the image data, pulling out RGB
}
the "convert" function pulls out the data fairly simply by
for (var i = 0, n = data.length; i < n; i += 4, pixel++) {
    r = data[i];
    g = data[i + 1];
    b = data[i + 2];
    color = r + g + b; // format is a string of "r, g, b"
}
Finally, the last part of the main algorithm filters out duplicate colors; I only want one occurrence of each. Here's the last part:
color = monoFilter(color); // the call

function monoFilter(s) {
    var unique = [];
    $.each(s, function(i, el) {
        if ($.inArray(el, unique) === -1) unique.push(el);
    });
    unique.splice(0, 1); // remove undefined
    unique.unshift("0, 0, 0"); // make sure I have black
    unique.push("255, 255, 255"); // and white
    return unique;
}
I'm hoping someone can help me identify why there is such a loss of color data in big files.
If anyone is interested enough to look at the GitHub repo, the relevant files are js/pixel2.js, js/dropzone.js, and ../index.html.
This is probably the cause of the problem:
color = r + g + b; // format is a string of "r, g, b"
This simply adds the numbers together, and the more pixels you have, the higher the risk of getting the same number. For example, these colors generate the same result:
R G B
color = 90 + 0 + 0 = 90;
color = 0 + 90 + 0 = 90;
color = 0 + 0 + 90 = 90;
even though they are completely different colors.
To avoid this you can do it like this if you want a string:
color = [r,g,b].join();
or you can create an integer value of them (which is faster to compare with than a string):
color = (b << 16) + (g << 8) + r; /// LSB byte-order
Even a Euclidean length would be better:
color = r*r + g*g + b*b;
but with the latter you eventually risk the same collision scenario as the original approach (though it is useful for nearest-color scenarios).
Anyways, hope this helps.
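To make the fix concrete, here is roughly the same idea expressed in Java (the byte[] RGBA layout mirrors the question's data array; the method name is illustrative):
import java.util.LinkedHashSet;
import java.util.Set;

// Packing the channels into one int gives a collision-free key, so - unlike
// r + g + b - a set keeps exactly one entry per distinct colour.
static Set<Integer> uniqueColors(byte[] data) { // RGBA buffer, 4 bytes per pixel
    Set<Integer> palette = new LinkedHashSet<>();
    for (int i = 0; i < data.length; i += 4) {
        int r = data[i] & 0xff, g = data[i + 1] & 0xff, b = data[i + 2] & 0xff;
        palette.add((r << 16) | (g << 8) | b); // 0xRRGGBB
    }
    return palette;
}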
"The problem was that I wasn't accounting for alpha. So a palette from an image that uses alpha would have accidental duplicate records."
I figured this out after finding this: Convert RGBA color to RGB.

Best way to serve / produce silhouettes of the US States?

I'm responsible for delivering pages to display primary results for the US elections State by State. Each page needs a banner with an image of the State, approx 250px by 250px. Now all I need to do is figure out how to serve / generate those images...
I've dug into the docs / examples for Protovis and think I could probably lift the State coordinate outlines - I would have to manually transform the coordinate data to be justified and sized properly (ick).
At the other end of the clever/brute spectrum is an enormous sprite or series of sprites. Even with PNG-8 compression, the file size of a grid of 50 non-overlapping 250x250px sprites is a concern, and sadly such a file doesn't seem to exist, so I'd have to create it by hand. Also unpleasant.
Who's got a better idea?
Answered: the right solution is to switch to d3.
What we hacked in for now:
drawStateInBox = function(box, state, color) {
    var w = $("#" + box).width(),
        h = $("#" + box).height(),
        off_x = 0,
        off_y = 0;
    borders = us_lowres[state].borders;
    // Preserve aspect ratio
    delta_lat = pv.max(borders[0], function(b) b.lat) - pv.min(borders[0], function(b) b.lat);
    delta_lng = pv.max(borders[0], function(b) b.lng) - pv.min(borders[0], function(b) b.lng);
    if (delta_lat / h > delta_lng / w) {
        scaled_h = h;
        scaled_w = w * delta_lat / delta_lng;
        off_x = (w - scaled_w) / 2;
    } else {
        scaled_h = h * delta_lat / delta_lng;
        scaled_w = w;
        off_y = (h - scaled_h) / 2;
    }
    var scale = pv.Geo.scale()
        .domain(us_lowres[state].borders[0])
        .range({x: off_x, y: off_y},
               {x: scaled_w + off_x, y: scaled_h + off_y});
    var vis = new pv.Panel(state)
        .canvas(box)
        .width(w)
        .height(h)
        .data(borders)
        .add(pv.Line)
        .data(function(l) l)
        .left(scale.x)
        .top(scale.y)
        .fillStyle(function(d, l, c) {
            return(color);
        })
        .lineWidth(0)
        .strokeStyle(color)
        .antialias(false);
    vis.render();
};
d3 seems to have the capability to do maps similar to what you want. The example shows both counties and states so you would just omit the counties and then provide the election results in the right format.
There is a set of maps on 50states.com, e.g. http://www.50states.com/maps/alabama.htm, which is about 5KB. Roughly, then, that's 250KB for the whole set. Since you mention using these separately, there's your answer.
Or are you doing more with this than just showing the outline?

How to compute the visible area based on a heightmap?

I have a heightmap. I want to efficiently compute which tiles in it are visible from an eye at any given location and height.
This paper suggests that heightmaps outperform turning the terrain into some kind of mesh, but it samples the grid using Bresenham's lines.
If I were to adopt that, I'd have to do a line-of-sight Bresenham's line for each and every tile on the map. It occurs to me that it ought to be possible to reuse most of the calculations and compute the whole visibility map in a single pass if you fill outwards away from the eye - a scanline-fill kind of approach, perhaps?
But the logic escapes me. What would the logic be?
Here is a heightmap with the visibility from a particular vantage point (green cube) painted over it ("viewshed", as in "watershed"?):
Here is the O(n) sweep that I came up with. It seems to be the same as Franklin and Ray's method given in the paper in the answer below (How to compute the visible area based on a heightmap?), only in this case I am walking from the eye outwards instead of walking the perimeter doing a Bresenham's line towards the centre. To my mind, my approach should have much better caching behaviour - i.e. be faster - and use less memory, since it doesn't have to track the vector for each tile, only remember a scanline's worth:
typedef std::vector<float> visbuf_t;

inline void map::_visibility_scan(const visbuf_t& in,visbuf_t& out,const vec_t& eye,int start_x,int stop_x,int y,int prev_y) {
    const int xdir = (start_x < stop_x)? 1: -1;
    for(int x=start_x; x!=stop_x; x+=xdir) {
        const int x_diff = abs(eye.x-x), y_diff = abs(eye.z-y);
        const bool horiz = (x_diff >= y_diff);
        const int x_step = horiz? 1: x_diff/y_diff;
        const int in_x = x-x_step*xdir; // where in the in buffer would we get the inner value?
        const float outer_d = vec2_t(x,y).distance(vec2_t(eye.x,eye.z));
        const float inner_d = vec2_t(in_x,horiz? y: prev_y).distance(vec2_t(eye.x,eye.z));
        const float inner = (horiz? out: in).at(in_x)*(outer_d/inner_d); // get the inner value, scaling by distance
        const float outer = height_at(x,y)-eye.y; // height we are at right now in the map, eye-relative
        if(inner <= outer) {
            out.at(x) = outer;
            vis.at(y*width+x) = VISIBLE;
        } else {
            out.at(x) = inner;
            vis.at(y*width+x) = NOT_VISIBLE;
        }
    }
}

void map::visibility_add(const vec_t& eye) {
    const float BASE = -10000; // represents a downward vector that would always be visible
    visbuf_t scan_0, scan_out, scan_in;
    scan_0.resize(width);
    vis[eye.z*width+eye.x-1] = vis[eye.z*width+eye.x] = vis[eye.z*width+eye.x+1] = VISIBLE;
    scan_0.at(eye.x) = BASE;
    scan_0.at(eye.x-1) = BASE;
    scan_0.at(eye.x+1) = BASE;
    _visibility_scan(scan_0,scan_0,eye,eye.x+2,width,eye.z,eye.z);
    _visibility_scan(scan_0,scan_0,eye,eye.x-2,-1,eye.z,eye.z);
    scan_out = scan_0;
    for(int y=eye.z+1; y<height; y++) {
        scan_in = scan_out;
        _visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y-1);
        _visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y-1);
    }
    scan_out = scan_0;
    for(int y=eye.z-1; y>=0; y--) {
        scan_in = scan_out;
        _visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y+1);
        _visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y+1);
    }
}
Is it a valid approach?
It uses centre-points rather than looking at the slope between the 'inner' pixel and its neighbour on the side that the line of sight passes.
Could the trig used to scale the vectors and such be replaced by factor multiplication?
It could use an array of bytes, since the heights are themselves bytes.
It's not a radial sweep; it does a whole scanline at a time, moving away from the point, and it only uses a couple of scanlines' worth of additional memory, which is neat.
If it works, you could imagine distributing it nicely using a radial sweep of blocks: you have to compute the centre-most tile first, but then you can distribute all immediately adjacent tiles from that (they just need to be given the edge-most intermediate values), and then in turn get more and more parallelism.
So how to most efficiently calculate this viewshed?
What you want is called a sweep algorithm. Basically you cast rays (Bresenham's) to each of the perimeter cells, but keep track of the horizon as you go and mark any cells you pass on the way as being visible or invisible (and update the ray's horizon if visible). This gets you down from the O(n^3) of the naive approach (testing each cell of an nxn DEM individually) to O(n^2).
There is a more detailed description of the algorithm in section 5.1 of this paper (which you might also find interesting for other reasons if you aspire to work with really enormous heightmaps).
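For concreteness, here is a minimal sketch of that sweep, written in Java since the idea is language-agnostic; the height[y][x] grid, the eye coordinates, and the boolean output array are assumptions about how the DEM is stored, not part of the original answer.
// Sketch: cast a ray from the eye to every perimeter cell, keep the steepest
// slope ("horizon") seen so far along the ray, and mark a cell visible only
// if its slope clears that horizon.
static boolean[][] viewshed(float[][] height, int eyeX, int eyeY, float eyeHeight) {
    int n = height.length; // assumes a square grid, height[y][x]
    boolean[][] visible = new boolean[n][n];
    float eyeZ = height[eyeY][eyeX] + eyeHeight;
    visible[eyeY][eyeX] = true;
    for (int i = 0; i < n; i++) { // every cell on the outer perimeter
        castRay(height, visible, eyeX, eyeY, eyeZ, i, 0);
        castRay(height, visible, eyeX, eyeY, eyeZ, i, n - 1);
        castRay(height, visible, eyeX, eyeY, eyeZ, 0, i);
        castRay(height, visible, eyeX, eyeY, eyeZ, n - 1, i);
    }
    return visible;
}

// Bresenham walk from the eye to (tx, ty), tracking the running horizon slope.
static void castRay(float[][] height, boolean[][] visible,
                    int eyeX, int eyeY, float eyeZ, int tx, int ty) {
    int dx = Math.abs(tx - eyeX), dy = Math.abs(ty - eyeY);
    int sx = eyeX < tx ? 1 : -1, sy = eyeY < ty ? 1 : -1;
    int err = dx - dy, x = eyeX, y = eyeY;
    double horizon = Double.NEGATIVE_INFINITY;
    while (x != tx || y != ty) {
        int e2 = 2 * err;
        if (e2 > -dy) { err -= dy; x += sx; }
        if (e2 <  dx) { err += dx; y += sy; }
        double dist = Math.hypot(x - eyeX, y - eyeY);
        double slope = (height[y][x] - eyeZ) / dist; // vertical angle of this cell from the eye
        if (slope >= horizon) {
            visible[y][x] = true;  // nothing closer blocks it
            horizon = slope;       // and it raises the horizon
        }
        // otherwise a closer, higher cell already hides this one; it stays invisible
    }
}
Each of the roughly 4n perimeter rays visits O(n) cells, which is where the O(n^2) total comes from.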

Calculate a color fade

Given two colors and n steps, how can one calculate n colors including the two given colors that create a fade effect?
If possible pseudo-code is preferred but this will probably be implemented in Java.
Thanks!
Divide each colour into its RGB components and then calculate the individual steps required.
int oldRed = 120;
int newRed = 200;
int steps = 10;
float redStepAmount = (newRed - oldRed) / (float) steps;

float currentRed = oldRed;
for (int i = 0; i < steps; i++) {
    currentRed += redStepAmount;
    // use Math.round(currentRed) here, together with the green and blue values for this step
}
Obviously extend that for green and blue.
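To return all n colours including both endpoints, a complete version of the same idea might look like the following sketch (assuming java.awt.Color; the method name is illustrative):
import java.awt.Color;
import java.util.ArrayList;
import java.util.List;

// Returns n colours fading from 'from' to 'to', endpoints included (assumes n >= 2).
static List<Color> fade(Color from, Color to, int n) {
    List<Color> colours = new ArrayList<>();
    for (int i = 0; i < n; i++) {
        float t = i / (float) (n - 1); // 0.0 at 'from', 1.0 at 'to'
        int r = Math.round(from.getRed()   + t * (to.getRed()   - from.getRed()));
        int g = Math.round(from.getGreen() + t * (to.getGreen() - from.getGreen()));
        int b = Math.round(from.getBlue()  + t * (to.getBlue()  - from.getBlue()));
        colours.add(new Color(r, g, b));
    }
    return colours;
}
For example, fade(new Color(120, 0, 0), new Color(200, 0, 0), 10) gives ten reds from the first value to the second.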
There are two good related questions you should also review:
Generating gradients programatically?
Conditional formatting — percentage to color conversion
Please note that you're often better off doing this in the HSV color space rather than RGB - it generates more pleasing colors to the human eye (lower chance of clashing or negative optical properties).
Good luck!
-Adam
If you want a blend that looks anything like most color picker GUI widgets, you really want to translate to HSL or HSV. From there, you're probably fine with linear interpolation in each dimension.
Trying to do any interpolations directly in RGB colorspace is a bad idea. It's way too nonlinear (and no, gamma correction won't help in this case).
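To illustrate the HSV suggestion in Java, here is a minimal sketch using java.awt.Color; the shortest-path hue handling and the method name are my own choices rather than something from the answers above:
import java.awt.Color;

// Blends two colours in HSB/HSV space; t runs from 0.0 (from) to 1.0 (to).
static Color blendHsb(Color from, Color to, float t) {
    float[] a = Color.RGBtoHSB(from.getRed(), from.getGreen(), from.getBlue(), null);
    float[] b = Color.RGBtoHSB(to.getRed(), to.getGreen(), to.getBlue(), null);
    // Hue is circular: take the shortest way around the colour wheel.
    float dh = b[0] - a[0];
    if (dh > 0.5f)  dh -= 1.0f;
    if (dh < -0.5f) dh += 1.0f;
    float h = (a[0] + t * dh + 1.0f) % 1.0f;
    float s = a[1] + t * (b[1] - a[1]);
    float v = a[2] + t * (b[2] - a[2]);
    return new Color(Color.HSBtoRGB(h, s, v));
}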
For those looking for something they can copy and paste, here is a quick function for RGB colors. It returns a single color that is ratio of the way from rgbColor1 to rgbColor2.
function fadeToColor(rgbColor1, rgbColor2, ratio) {
    var color1 = rgbColor1.substring(4, rgbColor1.length - 1).split(','),
        color2 = rgbColor2.substring(4, rgbColor2.length - 1).split(','),
        difference,
        newColor = [];
    for (var i = 0; i < color1.length; i++) {
        difference = color2[i] - color1[i];
        newColor.push(Math.floor(parseInt(color1[i], 10) + difference * ratio));
    }
    return 'rgb(' + newColor + ')';
}
The question is what transformation you want to occur. If you transform into the HSV colour space and are given
FF0000 and 00FF00
it will transition from red through yellow to green.
However, if you define "black" or some other shade as the mid-point of the blend, you have to shade to that colour first: ff0000 -> 000000 -> 00ff00, or via white: ff0000 -> ffffff -> 00ff00.
Transforming via HSV can be fun, however, because you have to use a bit of trig to map the circular hue onto the vector components.
The easiest thing to do is linear interpolation between the color components (see nickf's response). Just be aware that the eye is highly nonlinear, so it won't necessarily look like you're making even steps. Some color spaces attempt to address this (CIE, maybe?), so you might want to transform into another color space first, interpolate, then transform back to RGB or whatever you're using.
How about this answer
- (UIColor *)colorFromColor:(UIColor *)fromColor toColor:(UIColor *)toColor percent:(float)percent
{
    float dec = percent / 100.f;
    CGFloat fRed, fBlue, fGreen, fAlpha;
    CGFloat tRed, tBlue, tGreen, tAlpha;
    CGFloat red, green, blue, alpha;

    if (CGColorGetNumberOfComponents(fromColor.CGColor) == 2) {
        [fromColor getWhite:&fRed alpha:&fAlpha];
        fGreen = fRed;
        fBlue = fRed;
    }
    else {
        [fromColor getRed:&fRed green:&fGreen blue:&fBlue alpha:&fAlpha];
    }
    if (CGColorGetNumberOfComponents(toColor.CGColor) == 2) {
        [toColor getWhite:&tRed alpha:&tAlpha];
        tGreen = tRed;
        tBlue = tRed;
    }
    else {
        [toColor getRed:&tRed green:&tGreen blue:&tBlue alpha:&tAlpha];
    }

    red = (dec * (tRed - fRed)) + fRed;
    green = (dec * (tGreen - fGreen)) + fGreen;
    blue = (dec * (tBlue - fBlue)) + fBlue;
    alpha = (dec * (tAlpha - fAlpha)) + fAlpha;

    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
