I want to create a colour scroller effect. I have a function that takes RGB values (e.g. setColor(189, 234, 45)), and I want to change the colour rapidly without many repeats, to create the effect of scrolling through the colours.
I have tried the following, but it doesn't quite generate the effect I am looking for.
for (int i = 0; i < 256; i++) {
    for (int j = 0; j < 256; j++) {
        for (int k = 0; k < 256; k++) {
            setColor(i, j, k);
        }
    }
}
I wanted to know how a colour scroller's colours are arranged next to each other. The arrangement I am looking for is like the scroll on the right.
The colors you are working with are represented as R,G,B (red green blue) values. However, another
way to think about color is hue, saturation, value. In the scroll image you are trying to emulate,
it is the hue that is changing - the saturation and value (brightness) are unaffected.
Here is a function that happens to make a hue-cycle gradient like the one in the image you linked to:
int n = 256; // number of steps
float TWO_PI = 3.14159 * 2;
for (int i = 0; i < n; ++i) {
    int red = 128 + sin(i*TWO_PI/n + 0)          * 127;
    int grn = 128 + sin(i*TWO_PI/n + TWO_PI/3)   * 127;
    int blu = 128 + sin(i*TWO_PI/n + 2*TWO_PI/3) * 127;
    setColor(red, grn, blu);
}
To understand how that function works, I recommend that you read my color tutorial that GreenAsJade linked to.
However, that kind of gradient function isn't quite what you need, because you want to start from a particular color you are passing in and then go to the next color in the sequence. It's much easier to do this kind of thing if you represent your colors as HSV (or HSB) triplets instead of RGB triplets. Then you can manipulate just the hue component and get that kind of rainbow effect. It helps to have a set of functions that can convert from RGB to HSV and back again.
This site contains a bunch of color conversion source code, including the ones you need for those conversions. Using the two conversion functions supplied on that page, your code might look like:
void cycleMyColor(int *r, int *g, int *b) {
    float h, s, v, fr, fg, fb;
    RGBtoHSV(*r/255.0, *g/255.0, *b/255.0, &h, &s, &v);
    h += 1/256.0;   // increment the hue here
    h -= (int) h;   // and cycle around if necessary
    HSVtoRGB(&fr, &fg, &fb, h, s, v);
    *r = fr*255; *g = fg*255; *b = fb*255;
    setColor(*r, *g, *b);
}
This code is a little more complicated than it needs to be because the color conversions on that site use floating point color components that go from 0-1, instead of integers that go from 0-255, as you were using, so I'm spending a few lines converting between those two representations. You may find it simpler to just keep your color in HSB space, and then convert to RGB when you want to display it.
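For example, here is a minimal sketch (not from the original answer) of keeping the colour as an HSV triple the whole time and only converting when it is displayed. It assumes the same HSVtoRGB helper and a hue normalised to the 0-1 range, as in the snippet above; the stepAndShowColor name is just for illustration:
float h = 0.0f, s = 1.0f, v = 1.0f;    /* start fully saturated, at full brightness */

void stepAndShowColor(void) {
    float fr, fg, fb;
    h += 1.0f / 256.0f;                /* advance the hue one step */
    if (h >= 1.0f) h -= 1.0f;          /* wrap around the colour wheel */
    HSVtoRGB(&fr, &fg, &fb, h, s, v);  /* convert to RGB only for display */
    setColor((int)(fr * 255), (int)(fg * 255), (int)(fb * 255));
}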
As you mentioned in your edit, you don't like the sequence of colours, because it starts at black and ends at white, instead of starting at one end of the rainbow and going to the other.
So you need to work out a sequence of RGB values that goes from blue through green and yellow to red. That means you start at (0, 0, 255) and end at (255, 0, 0), without passing through (255, 255, 255) or (0, 0, 0) - in a nutshell, that's how it's done.
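For instance, one illustrative arrangement (a sketch, not the only way to do it) walks from blue to red in three straight legs, so the sequence never passes through black or white:
int steps = 256;                    /* steps per leg */
for (int i = 0; i < steps; i++)     /* leg 1: blue -> green */
    setColor(0, i, 255 - i);
for (int i = 0; i < steps; i++)     /* leg 2: green -> yellow */
    setColor(i, 255, 0);
for (int i = 0; i < steps; i++)     /* leg 3: yellow -> red */
    setColor(255, 255 - i, 0);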
There are many ways you could do this and get a pleasing effect - beyond the scope of an answer here. This article explores it in depth:
http://krazydad.com/tutorials/makecolors.php
I am writing a spatial shader in Godot to pixelate an object.
Previously, I tried to write outside of an object, however that is only possible in CanvasItem shaders, and now I am going back to 3D shaders due to rendering annoyances (I am unable to selectively hide items without using the culling mask, which, being limited to 20 layers, is not an extensible solution).
My naive approach:
Define a pixel "cell" resolution (e.g. 3x3 real pixels)
For each fragment:
If the entire "cell" of real pixels is within the model's draw bounds, color the current pixel as per the cell's lower-left pixel (the pixel whose coordinates are a multiple of the cell resolution).
If any pixel of the current "cell" is out of the draw bounds, set alpha to 1 to erase the entire cell.
Pseudo-code, for people asking for code of the likely non-existent functionality that I am seeking:
int cell_size = 3;
fragment {
    // the lower-left pixel of the cell that this fragment falls in
    vec2 cell_origin = vec2(FRAGCOORD.x - (FRAGCOORD.x % cell_size),
                            FRAGCOORD.y - (FRAGCOORD.y % cell_size));
    // check every pixel within the cell to see if all of them are part of the object being drawn
    int erase_pixel = 0;
    for (int y = 0; y < cell_size; y++) {
        for (int x = 0; x < cell_size; x++) {
            if (uv_in_model(cell_origin + vec2(x, y)) == false) {
                erase_pixel = 1;
            }
        }
    }
    albedo.a = erase_pixel;
}
tl;dr, is it possible to know if any given point will be called by the fragment function?
On your object's material there should be a property called Next Pass. Add a new Spatial Material in this section, open up flags and check transparent and unshaded, and then right-click it to bring up the option to convert it to a Shader Material.
Now, open up the new Shader Material's Shader. The last process should have created a Shader formatted with a fragment() function containing the line vec4 albedo_tex = texture(texture_albedo, base_uv);
In this line, you can replace "texture_albedo" with "SCREEN_TEXTURE" and "base_uv" with "SCREEN_UV". This should make the new shader look like nothing has changed, because the next pass material is just sampling the screen from the last pass.
Above that, make a variable called something along the lines of "pixelated" and set it to the following expression:
vec2 pixelated = floor(SCREEN_UV * scale) / scale; where scale is a float or vec2 containing the pixel size. Finally replace SCREEN_UV in the albedo_tex definition with pixelated.
After this, you can have a float depth which samples DEPTH_TEXTURE with pixelated like this:
float depth = texture(DEPTH_TEXTURE, pixelated).r;
This depth value will be very large for pixels that are just trying to render the background onto your object. So, add a conditional statement:
if (depth > 100000.0f) { ALPHA = 0.0f; }
As long as the flags on this new next pass shader were set correctly (transparent and unshaded) you should have a quick-and-dirty pixelator. I say this because it has some minor artifacts around the edges, but you can make scale a uniform variable and set it from the editor and scripts, so I think it works nicely.
"Testing if a pixel is modifiable" in your case means testing if the object should be rendering it at all with that depth conditional.
Here's the full shader with my modifications from the comments
// NOTE: Shader automatically converted from Godot Engine 3.4.stable's SpatialMaterial.
shader_type spatial;
render_mode blend_mix, depth_draw_opaque, cull_back, unshaded;

// the size of pixelated blocks on the screen relative to pixels
uniform int scale;

void vertex() {
}

// vec2 representation of one, used for calculation
const vec2 one = vec2(1.0f, 1.0f);

void fragment() {
    // scale SCREEN_UV up to the size of the viewport over the pixelation scale
    // assure scale is a multiple of 2 to avoid artefacts
    vec2 pixel_scale = VIEWPORT_SIZE / float(scale * 2);
    vec2 pixelated = SCREEN_UV * pixel_scale;
    // truncate the decimal place from the pixelated uvs and then shift them over by half a pixel
    pixelated = pixelated - mod(pixelated, one) + one / 2.0f;
    // scale the pixelated uvs back down to the screen
    pixelated /= pixel_scale;
    vec4 albedo_tex = texture(SCREEN_TEXTURE, pixelated);
    ALBEDO = albedo_tex.rgb;
    ALPHA = 1.0f;
    float depth = texture(DEPTH_TEXTURE, pixelated).r;
    if (depth > 10000.0f) {
        ALPHA = 0.0f;
    }
}
The project in question, https://github.com/matutter/Pixel2, is a personal project to replace some out-of-date software at work. What it should do is: the user adds an image, and it generates a color palette of the image. The color palette should have no duplicate colors. (That's the only important part.)
My question is: why do larger or hi-res or complex images not work as well? (loss of color data)
Using dropzone.js, I have the user put a picture on the page. The picture is a thumbnail. Next I use jQuery to find the src of an <img src="...">. I pass that src to a function that does this:
function generate(imgdata) {
    var imageObj = new Image();
    imageObj.src = imgdata;
    convert(imageObj); // the function that traverses the image data pulling out RGB
}
the "convert" function pulls out the data fairly simply by
for(var i=0, n=data.length; i<n; i+=4, pixel++ ) {
r = data[i];
g = data[i+1];
b = data[i+2];
color = r + g + b; // format is a string of **r, g, b**
}
Finally, the last part of the main algorithm filters out duplicate colors - I want only one occurrence of each. Here it is:
color = monoFilter(color); // the call

function monoFilter(s) {
    var unique = [];
    $.each(s, function(i, el) {
        if ($.inArray(el, unique) === -1) unique.push(el);
    });
    unique.splice(0, 1);          // remove undefined
    unique.unshift("0, 0, 0");    // make sure I have black
    unique.push("255, 255, 255"); // and white
    return unique;
}
I'm hoping someone can help me identify why there is such a loss of color data in big files.
If anyone is actually interested enough to look at the GitHub repo, the relevant files are js/pixel2.js, js/dropzone.js, and ../index.html.
This is probably the cause of the problem:
color = r + g + b; // format is a string of "r, g, b"
This simply adds the numbers together, and the more pixels you have, the higher the risk that different colors produce the same number. For example, these colors generate the same result:
R G B
color = 90 + 0 + 0 = 90;
color = 0 + 90 + 0 = 90;
color = 0 + 0 + 90 = 90;
even though they are completely different colors.
To avoid this you can do it like this if you want a string:
color = [r,g,b].join();
or you can create an integer value of them (which is faster to compare with than a string):
color = (b << 16) + (g << 8) + r; /// LSB byte-order
Even the squared length of the color vector would be better:
color = r*r + g*g + b*b;
but with the latter you eventually risk the same scenario as the initial one (though it is useful for nearest-color searches).
Anyways, hope this helps.
"The problem was that I wasn't accounting for alpha. So a palette from an image that uses alpha would have accidental duplicate records."
I figured this out after finding this Convert RGBA color to RGB
I'm trying to create a simplified hue/saturation picker for cocos2d. I want to create a gradient and pick colors from it. I need to recolor a black/white gradient image for each color (blue, red and so on), so I need to create many gradients. I know that I should use some blend functions to achieve this.
But I'm still a little confused about the best way to proceed.
Should I use blend functions at all?
My problem, basically, is that I use a gradient from black to transparent (or to white), but with
sprite.setColor(color);
I get a gradient from black to the desired color, whereas I need a gradient from the desired darker color to white.
What you need to do is create a 2D gradient that goes from unsaturated to saturated left-to-right, and from dark to light bottom-to-top. I'd do it by creating a new bitmap (or if you're using OpenGL, a texture). I'd then color each pixel using the following pseudocode:
hue = <whatever the user set the hue to>
for (row = 0; row < height; row++)
{
    for (col = 0; col < width; col++)
    {
        sat = col / width;    // use floating-point division so sat runs 0..1
        val = row / height;   // likewise for val
        rgb = HSVToRGB(hue, sat, val);
        setPixel(col, row, rgb);
    }
}
I need to process the first "Original" image to get something similar to the second "Enhanced" one. I applied some naive calculations and the new image has more contrast and stronger colors, but in the regions with higher color values a color hole appears. I have no idea about image processing; it would be great if you could suggest which concepts and/or algorithms I could apply to get the result without this problem.
Convert the image to the HSB (Hue, Saturation, Brightness) color space.
Multiply the saturation by some amount. Use a cutoff value if your platform requires it.
Example in Mathematica:
satMult = 4; (*saturation multiplier *)
imgHSB = ColorConvert[Import["http://i.imgur.com/8XkxR.jpg"], "HSB"];
cs = ColorSeparate[imgHSB]; (* separate in H, S and B*)
newSat = Image[ImageData[cs[[2]]] * satMult]; (* cs[[2]] is the saturation*)
ColorCombine[{cs[[1]], newSat, cs[[3]]}, "HSB"] (* rebuild the image *)
A table of the results for increasing saturation values:
The "holes" that you see in the processed picture are the darker areas of the original picture, which went to negative values with your darkening algorithm. I suspect these out of range values are then written to the new image as positive numbers, so they end up in the higher part of the brightness scale. For example, let's say a pixel value is 10, and you are substracting 12 from all pixels to darken them a bit. This pixel will underflow and become -2. When you write it back to the file, -2 gets represented as 0xfe in hex, and this is 254 if you take it as an unsigned number.
You should use an algorithm that keeps the pixel values within the valid range, or at least you should "clamp" the values to the valid range. A typical clamp function defined as a C macro would be:
#define clamp(p) (p < 0 ? 0 : (p > 255 ? 255 : p))
If you add the above macro to your processing function it will take care of the "holes", but instead you will now have dark colors in those places.
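For illustration, here is a minimal sketch (not from the original answer) of applying that macro while darkening an 8-bit grayscale buffer; pixels, count and amount are assumed inputs:
void darken(unsigned char *pixels, int count, int amount) {
    for (int i = 0; i < count; i++) {
        int v = (int)pixels[i] - amount;      /* do the subtraction in a signed int */
        pixels[i] = (unsigned char)clamp(v);  /* clamp keeps the result in 0..255 */
    }
}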
If you are ready for something a bit more advanced, here on Wikipedia you can find the brightness and contrast formulas used by the GIMP. These will do a pretty good job with your image if you choose the proper coefficients.
This Wikipedia article does a good job of explaining histogram equalization for contrast enhancement.
Code for grayscale images:
#include <math.h>
#include <stdlib.h>

unsigned char* EnhanceContrast(unsigned char* data, int width, int height)
{
    /* histogram of pixel values */
    int* cdf = (int*) calloc(256, sizeof(int));
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int val = data[width*y + x];
            cdf[val]++;
        }
    }

    /* accumulate into a cumulative distribution function (CDF) */
    int cdf_min = cdf[0];
    for (int i = 1; i < 256; i++) {
        cdf[i] += cdf[i-1];
        if (cdf[i] < cdf_min) {
            cdf_min = cdf[i];
        }
    }

    /* remap each pixel through the equalized CDF */
    unsigned char* enhanced_data = (unsigned char*) malloc(width*height);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            enhanced_data[width*y + x] = (unsigned char) round(
                (cdf[data[width*y + x]] - cdf_min) * 255.0 / (width*height - cdf_min));
        }
    }

    free(cdf);
    return enhanced_data;
}
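For completeness, a hypothetical usage sketch, assuming pixels already holds width*height grayscale bytes loaded elsewhere:
unsigned char* equalized = EnhanceContrast(pixels, width, height);
/* ... display or save 'equalized' here ... */
free(equalized);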
Given two colors and n steps, how can one calculate n colors including the two given colors that create a fade effect?
If possible pseudo-code is preferred but this will probably be implemented in Java.
Thanks!
Divide each colour into its RGB components and then calculate the individual steps required.
oldRed = 120;
newRed = 200;
steps = 10;
redStepAmount = (newRed - oldRed) / steps;
currentRed = oldRed;
for (i = 0; i < steps; i++) {
    currentRed += redStepAmount;
}
Obviously extend that for green and blue.
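Putting all three channels together, here is a rough C-style sketch (just an illustration, not part of the original pseudocode); setColor stands in for however you emit each colour:
void fade(int r1, int g1, int b1, int r2, int g2, int b2, int steps) {
    for (int i = 0; i < steps; i++) {
        float t = (steps > 1) ? (float)i / (float)(steps - 1) : 0.0f;
        int r = (int)(r1 + (r2 - r1) * t + 0.5f);   /* round to nearest */
        int g = (int)(g1 + (g2 - g1) * t + 0.5f);
        int b = (int)(b1 + (b2 - b1) * t + 0.5f);
        setColor(r, g, b);   /* i = 0 gives the first colour, i = steps - 1 the second */
    }
}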
There are two good related questions you should also review:
Generating gradients programatically?
Conditional formatting — percentage to color conversion
Please note that you're often better off doing this in the HSV color space rather than RGB - it generates more pleasing colors to the human eye (lower chance of clashing or negative optical properties).
Good luck!
-Adam
If you want a blend that looks anything like most color picker GUI widgets, you really want to translate to HSL or HSV. From there, you're probably fine with linear interpolation in each dimension.
Trying to do any interpolations directly in RGB colorspace is a bad idea. It's way too nonlinear (and no, gamma correction won't help in this case).
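As a rough sketch of that idea, assuming RGB-to-HSV and HSV-to-RGB helpers like the ones mentioned elsewhere on this page (with hue, saturation and value all normalised to 0-1), the main subtlety is interpolating the hue the short way around the circle:
void blendHSV(float r1, float g1, float b1,
              float r2, float g2, float b2,
              float t, float *r, float *g, float *b) {
    float h1, s1, v1, h2, s2, v2;
    RGBtoHSV(r1, g1, b1, &h1, &s1, &v1);   /* assumed helper, components in 0..1 */
    RGBtoHSV(r2, g2, b2, &h2, &s2, &v2);

    float dh = h2 - h1;
    if (dh >  0.5f) dh -= 1.0f;            /* take the short way around the hue circle */
    if (dh < -0.5f) dh += 1.0f;

    float h = h1 + dh * t;
    if (h <  0.0f) h += 1.0f;
    if (h >= 1.0f) h -= 1.0f;

    HSVtoRGB(r, g, b, h, s1 + (s2 - s1) * t, v1 + (v2 - v1) * t);
}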
For those looking for something they can copy and paste: here is a quick function for RGB colors. It returns a single color that is ratio of the way from rgbColor1 to rgbColor2.
function fadeToColor(rgbColor1, rgbColor2, ratio) {
    var color1 = rgbColor1.substring(4, rgbColor1.length - 1).split(','),
        color2 = rgbColor2.substring(4, rgbColor2.length - 1).split(','),
        difference,
        newColor = [];
    for (var i = 0; i < color1.length; i++) {
        difference = color2[i] - color1[i];
        newColor.push(Math.floor(parseInt(color1[i], 10) + difference * ratio));
    }
    return 'rgb(' + newColor + ')';
}
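For example, fadeToColor('rgb(255, 0, 0)', 'rgb(0, 0, 255)', 0.5) should return 'rgb(127,0,127)', the color halfway between red and blue.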
The question is what transformation you want to occur. If you transpose into the HSV colour space, then given
FF0000 and 00FF00
it will transition from red through yellow to green.
However, if you define "black" or some other shade as the mid-point of the blend, you have to shade to that colour first: ff0000 -> 000000 -> 00ff00, or via white: ff0000 -> ffffff -> 00ff00.
Transforming via HSV, however, can be fun, because you have to use a bit of trigonometry to map the circular hue onto the vector components.
The easiest thing to do is linear interpolation between the color components (see nickf's response). Just be aware that the eye is highly nonlinear, so it won't necessarily look like you're making even steps. Some color spaces attempt to address this (CIE maybe?), so you might want to transform into another color space first, interpolate, then transform back to RGB or whatever you're using.
How about this answer
- (UIColor *)colorFromColor:(UIColor *)fromColor toColor:(UIColor *)toColor percent:(float)percent
{
    float dec = percent / 100.f;
    CGFloat fRed, fBlue, fGreen, fAlpha;
    CGFloat tRed, tBlue, tGreen, tAlpha;
    CGFloat red, green, blue, alpha;

    if (CGColorGetNumberOfComponents(fromColor.CGColor) == 2) {
        [fromColor getWhite:&fRed alpha:&fAlpha];
        fGreen = fRed;
        fBlue = fRed;
    }
    else {
        [fromColor getRed:&fRed green:&fGreen blue:&fBlue alpha:&fAlpha];
    }

    if (CGColorGetNumberOfComponents(toColor.CGColor) == 2) {
        [toColor getWhite:&tRed alpha:&tAlpha];
        tGreen = tRed;
        tBlue = tRed;
    }
    else {
        [toColor getRed:&tRed green:&tGreen blue:&tBlue alpha:&tAlpha];
    }

    red = (dec * (tRed - fRed)) + fRed;
    green = (dec * (tGreen - fGreen)) + fGreen;
    blue = (dec * (tBlue - fBlue)) + fBlue;
    alpha = (dec * (tAlpha - fAlpha)) + fAlpha;

    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}