In Three.js, is there a simple and straightforward way to change the color of an object after it has been rendered? Even better, can you change the RGB composition of a color after it has been assigned to a bunch of lines, without going through each line or child individually?
We have tried several approaches. First, we assigned colors to line segments,
lineG.vertices.push(new THREE.Vector3(xA, zA, yA)); // segment start
lineG.vertices.push(new THREE.Vector3(xB, zB, yB)); // segment end
for (var i = 0; i < 2; i++) lineG.colors.push(colr); // one colour entry per vertex
newLine = new THREE.Line(lineG, lineMat);
lines.add(newLine);
and then we changed the composition of the color as the simulation progressed.
blue_one.setRGB(0.0,0.0+a,0.3+a); colr = blue_one
where the value of 'a' cycles through 0.15, 0.3 and 0.45 as the simulation progresses. We found that we were able to change the composition of the color before the first render, but not in any of the subsequent renders.
After that we tried assigning different sets of lines to different objects (although they were part of the same storm runoff pattern) and then changing the colors of each one of the objects, but no cigar there either.
I need to make the words I load from an external file not overlap each other. I have over 50 words with random text sizes and positions when the sketch runs, but they overlap.
How can I make them not overlap each other? The result would probably look like a word cloud.
In case my code helps, here it is:
String[] words;
int index = 0;

void setup() {
  size(500, 500);
  background(255);
  String[] lines = loadStrings("alice_just_text.txt");
  String entireplay = join(lines, " "); // join the lines into one string
  words = splitTokens(entireplay, ",.?!:-;:()03 "); // split it into words
  for (int i = 0; i < 50; i++) {
    float x = random(width);
    float y = random(height);
    int index = int(random(words.length)); // pick a random word
    textSize(random(60)); // random font size
    fill(0);
    textAlign(CENTER);
    text(words[index], x, y, width/2, height/2);
    println(words[index]);
    index++;
  }
}
Stack Overflow isn't really designed for general "how do I do this" type questions. You'll have much better luck if you post a more specific "I tried X, expected Y, but got Z instead" type question. But I'll try to help in a general sense:
You need to break your problem down into smaller pieces and then take on those pieces one at a time.
For example, you can isolate your problem to making sure rectangles don't overlap, which you can break down even further. There are a number of ways to do that:
You could use a grid to lay out your rectangles. Figure out how many squares a line of text takes up, then find a place in your grid where that word will fit. You could use something like a 2D array of boolean values, for example.
Or you could generate a random location, and then check whether there's already a rectangle there. If so, pick a new random location until you find a clear spot.
In any case, you'll probably need to use collision detection (either point-rectangle or rectangle-rectangle) to determine whether your rectangles are overlapping.
Start small. Create a small example program that just shows two rectangles on the screen. Hardcode their positions at first, but make it so they turn red if they're colliding. Work your way up from there. Make it so you can add rectangles using the mouse, but only let the user add them if there is no overlap. Then add the random location choosing. If you get stuck on a specific step, then post a MCVE and we'll go from there. Good luck.
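To make the overlap check concrete, here is a minimal sketch in plain Java rather than Processing (the 100x30 bounding box, retry limit, and class names are made-up placeholders; in a real sketch you would measure each word with textWidth() and textAscent()/textDescent()):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class WordPlacer {

    // Axis-aligned rectangle; (x, y) is the top-left corner.
    static class Rect {
        float x, y, w, h;
        Rect(float x, float y, float w, float h) {
            this.x = x; this.y = y; this.w = w; this.h = h;
        }
        // Rectangle-rectangle collision test.
        boolean overlaps(Rect o) {
            return x < o.x + o.w && x + w > o.x
                && y < o.y + o.h && y + h > o.y;
        }
    }

    public static void main(String[] args) {
        Random rng = new Random();
        List<Rect> placed = new ArrayList<Rect>();
        int canvasW = 500, canvasH = 500;
        float wordW = 100, wordH = 30; // placeholder bounding box size

        for (int i = 0; i < 50; i++) {
            for (int attempt = 0; attempt < 1000; attempt++) {
                Rect r = new Rect(rng.nextFloat() * (canvasW - wordW),
                                  rng.nextFloat() * (canvasH - wordH), wordW, wordH);
                boolean clear = true;
                for (Rect p : placed) {
                    if (p.overlaps(r)) { clear = false; break; }
                }
                if (clear) {          // found a free spot: keep it and
                    placed.add(r);    // draw the word here in a real sketch
                    break;
                }
                // otherwise try another random spot; give up after 1000 tries
            }
        }
        System.out.println("Placed " + placed.size() + " non-overlapping rectangles.");
    }
}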
I am working in Processing and I would like to compare the color of two pixels from two different images.
Let's say we are comparing the pixel at position 10:
color c1 = image1.pixels[10]; color c2 = image2.pixels[10];
if (c1 == c2) { // do something }
First I was playing with brightness:
if (brightness(c1) == brightness(c2))
Generally it was working, but not exactly as I wanted, since the pixels were a little bit similar but not exactly the same color.
If you want to compare colours you are probably better off comparing the three basic channels instead of the actual number that "color" is. Thus instead of
if(c1 == c2)
where you compare two large numbers like 13314249 you can go
if(red(c1) == red(c2) && green(c1) == green(c2) && blue(c1) == blue(c2))
where you compare numbers from 0 - 255, the possible values of red or green or blue you can get from a colour. As for the "little bit similar" colours, you can set a threshold and any difference below that threshold will be considered negligible thus the colours are the same. Something like this:
int threshold = 5;
if (abs(red(c1) - red(c2)) < threshold && abs(green(c1) - green(c2)) < threshold && abs(blue(c1) - blue(c2)) < threshold)
Remember, you have to take the absolute difference! This way, if you decrease the threshold only very similar colours are considered the same, while if you increase it quite different colours can be considered the same. The threshold value depends on your liking!
This would also work with your brightness example...
int threshold = 5;
if(abs(brightness(c1) - brightness(c2)) < threshold)
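Packaged as a small self-contained helper, the same idea might look like this (plain Java rather than Processing; it pulls the channels out of the packed 32-bit ARGB int by bit-shifting, which is also how Processing stores a color, instead of calling red()/green()/blue()):

public class ColorCompare {
    static final int THRESHOLD = 5; // tweak to taste

    // True if the two ARGB-packed colours differ by less than THRESHOLD in
    // every channel.
    static boolean similar(int c1, int c2) {
        int r1 = (c1 >> 16) & 0xFF, g1 = (c1 >> 8) & 0xFF, b1 = c1 & 0xFF;
        int r2 = (c2 >> 16) & 0xFF, g2 = (c2 >> 8) & 0xFF, b2 = c2 & 0xFF;
        return Math.abs(r1 - r2) < THRESHOLD
            && Math.abs(g1 - g2) < THRESHOLD
            && Math.abs(b1 - b2) < THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(similar(0xFF000080, 0xFF000082)); // true: nearly the same blue
        System.out.println(similar(0xFF000080, 0xFF0000FF)); // false: blue differs by 127
    }
}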
To extend on Petros's answer. Generally, when I am comparing image pixels, I normalize, so that the code will work with images that are not in standard range 0-255. It also is good when you are doing many operations on the images to keep in mind the range you are currently working with for scaling purposes.
MAX_PIXEL = 255  // maybe the range is different for some reason
MIN_PIXEL = 0
pixel_difference = 10
threshold = pixel_difference / (MAX_PIXEL - MIN_PIXEL)
if (abs((brightness(c1) - brightness(c2)) / (MAX_PIXEL - MIN_PIXEL)) < threshold) {
  // then the pixels are similar
}
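A runnable Java version of that pseudocode, assuming the brightness values have already been extracted as floats in the image's native range, might look like:

public class NormalizedCompare {
    static final float MAX_PIXEL = 255f; // maybe the range is different for some reason
    static final float MIN_PIXEL = 0f;

    // Compare two brightness values on a normalized 0..1 scale, so the same
    // threshold works regardless of the image's native range.
    static boolean similar(float brightness1, float brightness2, float pixelDifference) {
        float range = MAX_PIXEL - MIN_PIXEL;
        float threshold = pixelDifference / range;
        return Math.abs((brightness1 - brightness2) / range) < threshold;
    }

    public static void main(String[] args) {
        System.out.println(similar(120f, 125f, 10f)); // true: they differ by 5 < 10
        System.out.println(similar(120f, 140f, 10f)); // false: they differ by 20
    }
}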
Sometimes you can gain more ground by transforming to a different color space.
And depending on the task at hand you can build a background model that adapts over time, or compare higher-level global features such as histograms, or local features such as Scale-Invariant Feature Transform (SIFT) descriptors, corners, or edges.
I have a graph with zoom features. My main observation was that the x-axis updates its scale based on my current zoom level. I wanted the y-axis to do this too, so I enabled zoom.y(y), the undesired side effect being that now the user can zoom out in all directions, even into negative values "below" the graph.
http://jsfiddle.net/ericps/xJ3Ke/5/
var zoom = d3.behavior.zoom().scaleExtent([0.2, 5]).on("zoom", draw);
doesn't seem to really take the y-axis into account, and the user can still drag the chart anywhere, in any direction, to infinity.
One idea I had, independent of having zoom.y(y) enabled, is to simply redraw the y-axis based on what is in the currently visible range, i.e. some kind of redraw driven by the position of the x-axis only. I don't want up-and-down scrolling at all now, only left and right.
Aside from commenting out zoom.y(y), how would this be done? Insight appreciated.
All you need to do is update the y scale domain in your draw method.
The zoom function will modify the associated scales and set their domain to simulate a zoom, so you can get your visible x data bounds by doing x.invert(0) and x.invert(width), for example. If you converted your data to use Dates instead of strings, that is what I would suggest you use to filter; it would probably be more efficient.
As it is, though, you can still use the x scale to filter down to your visible data, find the y extents of those values, and set your y scale's domain to match. In fact you can do all of this in just a few lines (in your zoom update callback):
var yExtent = d3.extent(data.filter(function(d) {
var dt = x(d.date);
return dt > 0 && dt < width;
}), function(d) { return d.value; });
y.domain(yExtent).nice();
You can try it out here
To better explain what is going on:
The zoom behaviour listens to mouse events and modifies the range of the associated scales.
The scales are used by the axes which draw them as lines with ticks, and the scales are also used by the data associated with your paths and areas as you've set them up in callbacks.
So when the zoom changes it fires a callback and the basic method is what you had:
svg.select("g.x.axis").call(xAxis);
svg.select("g.y.axis").call(yAxis);
svg.select("path.area").attr("d", area);
svg.select("path.line").attr("d", line);
We redraw the x- and y-axes with the newly updated domains, and we redraw (recompute) the area and the line, also with the newly domained x and y scales.
So to get the behaviour you wanted, we take away the default zoom behaviour on the y scale and instead modify the y scale's domain ourselves whenever we get a zoom or pan: conveniently, we already have a callback for those actions because of the zoom behaviour.
The first step in computing our y scale's domain is to figure out which data values are visible. The x axis has been configured to output to a range of 0 to width, and the zoom behaviour has updated the x scale's domain so that only a subset of the original domain outputs to this range. So we use the JavaScript array's filter method to pull out only those data objects whose mapping puts them in our visible range:
data.filter(function(d) {
var dt = x(d.date);
return dt > 0 && dt < width;
})
Then we use the handy d3.extent method to return the min and max values in an array. But because our array is all objects, we need an accessor function so that extent has some numbers to actually compare (this is a common pattern in d3):
d3.extent(filteredData, function(d) { return d.value; });
So now we know the min and max values for all the data points that are drawn given our current x scale. The last bit is then just to set the y scale's domain and continue as normal!
y.domain(yExtent).nice();
I found the nice method in the API; it's the kind of thing you want a scale to do, and d3 often does things like that for you.
A great tutorial for figuring out some of this stuff is: http://alignedleft.com/tutorials/
It is worth stepping through even the parts you think you know already.
I have a hunch this has been done before but I am a total layman at this and don't know how to begin to ask the right question. So I will describe what I am trying to do...
I have an unknown ARGB color. I only know its absolute RGB value as displayed over two known opaque background colors, for example black 0x000000 and white 0xFFFFFF. So, to continue the example, if I know that the ARGB color is RGB 0x000080 equivalent when displayed over 0x000000 and I know that the same ARGB color is RGB 0x7F7FFF equivalent when displayed over 0xFFFFFF, is there a way to compute what the original ARGB color is?
Or is this even possible???
So, you know that putting (a,r,g,b) over (r1,g1,b1) gives you (R1,G1,B1) and that putting it over (r2,g2,b2) gives you (R2,G2,B2). In other words -- incidentally I'm going to work here in units where a ranges from 0 to 1 -- you know (1-a)r1+ar=R1, (1-a)r2+ar=R2, etc. Take those two and subtract: you get (1-a)(r1-r2)=R1-R2 and hence a=1-(R1-R2)/(r1-r2). Once you know a, you can work everything else out.
You should actually compute the values of a you get from doing that calculation on all three of {R,G,B} and average them or something, to reduce the effects of roundoff error. In fact I'd recommend that you take a = 1 - [(R1-R2)sign(r1-r2) + (G1-G2)sign(g1-g2) + (B1-B2)sign(b1-b2)] / (|r1-r2|+|g1-g2|+|b1-b2|), which amounts to weighting the more reliable colours more highly.
Now you have, e.g., r = (R1-(1-a)r1)/a = (R2-(1-a)r2)/a. These two would be equal if you had infinite-precision values for a,r,g,b, but of course in practice they may differ slightly. Average them: r = [(R1+R2)-(1-a)(r1+r2)]/2a.
If your value of a happens to be very small then you'll get only rather unreliable information about r,g,b. (In the limit where a=0 you'll get no information at all, and there's obviously nothing you can do about that.) It's possible that you may get numbers outside the range 0..255, in which case I don't think you can do better than just clipping.
Here's how it works out for your particular example. (r1,g1,b1)=(0,0,0); (r2,g2,b2)=(255,255,255); (R1,G1,B1)=(0,0,128); (R2,G2,B2)=(127,127,255). So a = 1 - [127+127+127]/[255+255+255] = 128/255, which happens to be one of the 256 actually-possible values of a. (If it weren't, we should probably round it at this stage.)
Now r = (127-255*127/255)*255/256 = 0; likewise g = 0; and b = (383-255*127/255)*255/256 = 255.
So our ARGB colour was 80,00,00,FF.
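For reference, here is the whole recovery written out as a small Java program (my own packaging of the formulas above; the variable names are mine, and it assumes the two backgrounds differ):

public class AlphaRecovery {
    // Recover (a, r, g, b) of an unknown ARGB colour from its appearance over
    // two known opaque backgrounds. All channel values are 0..255.
    public static void main(String[] args) {
        // Background 1 (black) and the composite observed over it.
        int r1 = 0x00, g1 = 0x00, b1 = 0x00;
        int R1 = 0x00, G1 = 0x00, B1 = 0x80;
        // Background 2 (white) and the composite observed over it.
        int r2 = 0xFF, g2 = 0xFF, b2 = 0xFF;
        int R2 = 0x7F, G2 = 0x7F, B2 = 0xFF;

        // a = 1 - [(R1-R2)sign(r1-r2) + ...] / (|r1-r2| + |g1-g2| + |b1-b2|),
        // which weights the more widely separated channels more heavily.
        double num = (R1 - R2) * Math.signum(r1 - r2)
                   + (G1 - G2) * Math.signum(g1 - g2)
                   + (B1 - B2) * Math.signum(b1 - b2);
        double den = Math.abs(r1 - r2) + Math.abs(g1 - g2) + Math.abs(b1 - b2);
        double a = 1.0 - num / den;

        // r = [(R1+R2) - (1-a)(r1+r2)] / 2a, and likewise for g and b.
        double r = ((R1 + R2) - (1 - a) * (r1 + r2)) / (2 * a);
        double g = ((G1 + G2) - (1 - a) * (g1 + g2)) / (2 * a);
        double b = ((B1 + B2) - (1 - a) * (b1 + b2)) / (2 * a);

        // Expected output: a=128 r=0 g=0 b=255, i.e. ARGB 80,00,00,FF.
        System.out.printf("a=%.0f r=%.0f g=%.0f b=%.0f%n",
                          a * 255, clamp(r), clamp(g), clamp(b));
    }

    static double clamp(double v) { return Math.max(0, Math.min(255, v)); }
}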
Choosing black and white as the background colors is the best choice, both for ease of calculation and accuracy of result. With lots of abuse of notation....
a(RGB) + (1-a)0xFFFFFF = 0x7F7FFF
a(RGB) + (1-a)0x000000 = 0x000080
Subtracting the second from the first...
(1-a)0xFFFFFF = 0x7F7FFF-0x000080 = 0x7F7F7F
So
(1-a) = 0x7F/0xFF
a = (0xFF-0x7F)/0xFF = 0x80/0xFF
A = 0x80
and RGB = (a(RGB))/a = 0x000080/a = 0x0000FF
You can do something very similar with other choices of background color. The smaller a is, and the closer the two background colors are, the less accurately you will be able to determine the RGBA value. Consider the extreme cases where A=0 or where the two background colors are the same.
I have code that needs to render regions of my object differently depending on their location. I am trying to use a colour map to define these regions.
The problem is that when I sample from my colour map, I get collisions. That is, two regions with different colours in the colour map get the same value returned from the sampler.
I've tried various formats of my colour map. I set the colours for each region to be "5" apart in each case;
Indexed colour
RGB, RGBA: region 1 will have RGB 5%,5%,5%. region 2 will have RGB 10%,10%,10% and so on.
HSV Greyscale: region 1 will have HSV 0,0,5%. region 2 will have HSV 0,0,10% and so on.
(Values selected in The Gimp)
The tex2D sampler returns a value [0..1].
[ I then intend to derive an int array index from region. Code to do with that is unrelated, so has been removed from the question ]
float region = tex2D(gColourmapSampler,In.UV).x;
Sampling the "5%" colour gave a "region" of 0.05098 in hlsl.
From this I assume the 5% represents 5/100*255, or 12.75, which is rounded to 13 when stored in the texture. (Reasoning: 0.05098 * 255 ~= 13)
By this logic, the 50% should be stored as 127.5.
Sampled, I get 0.50196 which implies it was stored as 128.
the 70% should be stored as 178.5.
Sampled, I get 0.698039, which implies it was stored as 178.
What rounding is going on here?
(127.5 becomes 128, 178.5 becomes 178 ?!)
Edit: OK,
http://en.wikipedia.org/wiki/Bankers_rounding#Round_half_to_even
Apparently this is "banker's rounding". I have no idea why this is being used, but it solves my problem. Apparently, it's a Gimp issue.
I am using Shader Model 2 and FX Composer. This is my sampler declaration;
//Colour map
texture gColourmapTexture <
string ResourceName = "Globe_Colourmap_Regions_Greyscale.png";
string ResourceType = "2D";
>;
sampler2D gColourmapSampler : register(s1) = sampler_state {
Texture = <gColourmapTexture>;
#if DIRECT3D_VERSION >= 0xa00
Filter = MIN_MAG_MIP_LINEAR;
#else /* DIRECT3D_VERSION < 0xa00 */
MinFilter = Linear;
MipFilter = Linear;
MagFilter = Linear;
#endif /* DIRECT3D_VERSION */
AddressU = Clamp;
AddressV = Clamp;
};
I never used HLSL, but I did use GLSL a while back (and I must admit it's terribly far back in my head).
One issue I had with textures is that 0 is not the first pixel and 1 is not the second one: 0 is the left edge of the first pixel and 1 is its right edge (in texel units). The values get interpolated automatically, and that can cause serious trouble when you need precision, such as when applying a lookup table rather than a normal texture. You need to aim for the middle of the pixel, so ask for [0.5, 0.5], [1.5, 0.5] rather than [0, 0], [1, 0], and so on.
At least, that's the way it was in GLSL.
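To illustrate the texel-centre idea, here is a tiny Java snippet with a made-up texture width (an illustration of the addressing math, not shader code):

public class TexelCenter {
    // Normalized UV coordinate of the centre of texel i in a texture that is
    // 'size' texels wide (or tall).
    static float texelCenter(int i, int size) {
        return (i + 0.5f) / size;
    }

    public static void main(String[] args) {
        int width = 256; // made-up texture width
        // Sampling at exactly i/width lands on a texel edge; with linear
        // filtering the result blends two neighbouring texels. Sampling at
        // the centre avoids that.
        System.out.println("edge of texel 0:   " + (0f / width));
        System.out.println("centre of texel 0: " + texelCenter(0, width));
        System.out.println("centre of texel 1: " + texelCenter(1, width));
    }
}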
Beware: region in levels[region] is rounded down. When you see 5% in your image editor, the actual value in the texture's 8-bit representation is 5/100*255 = 12.75, which may be stored as either 12 or 13. If it is 12, the rounding down will hit you. If you want rounding to nearest, you need to change this to levels[region+0.5].
Another similar thing (as already written by Louis-Philippe) which might hit you is the texture-coordinate rounding rules. You always need to hit a spot inside the texel so that you are not in between two texels, otherwise the result is ill-defined (you may get either of the two, effectively at random) and some of your source texels may disappear while others duplicate. Those rules are different for bilinear and point sampling; you may need to add half a texel when sampling to compensate for this.
GIMP uses banker's rounding. Apparently.
This threw out my code to derive region indices.
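For reference, round-half-to-even ("banker's rounding") is the same rule implemented by Java's Math.rint, so the stored values observed above are easy to reproduce (an illustrative snippet, unrelated to the shader code itself):

public class RoundHalfToEven {
    public static void main(String[] args) {
        // Math.rint rounds halfway cases to the nearest even integer,
        // matching the behaviour observed in the stored texture values.
        System.out.println(Math.rint(127.5)); // 128.0 (rounds up: 128 is even)
        System.out.println(Math.rint(178.5)); // 178.0 (rounds down: 178 is even)
        System.out.println(Math.rint(12.75)); // 13.0  (not a halfway case)
    }
}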