I'm trying to generate a pixelated circle with pixi.js and am having some issues.
With this code:
for (var x = 0; x <= r; x += 10)
{
    // roundTo is a custom rounding helper (not a built-in Number method)
    var drawY = Math.sqrt(Math.pow(r, 2) - Math.pow(x, 2)).roundTo(10);
    for (var y = 0; y <= drawY; y += 10)
    {
        graphics.drawRect(x, y, 10, 10);
        graphics.drawRect(y, x, 10, 10);
        graphics.drawRect(-x, y, 10, 10);
        graphics.drawRect(-y, x, 10, 10);
        graphics.drawRect(x, -y, 10, 10);
        graphics.drawRect(y, -x, 10, 10);
        // graphics.drawRect(-x, -y, 10, 10);
        // graphics.drawRect(-y, -x, 10, 10);
    }
}
I get this:
If I uncomment those last two lines to give me the last quadrant, I get this:
If I comment everything except for those last two lines, I get this:
I thought maybe it had something to do with the size of the texture, or with the fact that it is drawing at negative coordinates, but neither explanation makes sense given the results from the other quadrants.
Edit
I messed around with my code and came up with a much more efficient way of drawing the circle, but now the problem seems even stranger.
graphics.drawRect(0,0,10,10);
gives:
while
graphics.drawRect(0,0,10,10);
graphics.drawRect(0,-200,10,10);
gives the exact same thing (still only the one square), but
graphics.drawRect(0,0,10,10);
graphics.drawRect(0,200,10,10);
gives:
and
graphics.drawRect(0,0,10,10);
graphics.drawRect(-200,-200,10,10);
gives:
Update again
So if I draw a grid of squares around the entire thing, the circle draws perfectly.
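For reference, the quarter-circle scan with all eight mirrored rectangles can be sketched in plain JS (no pixi.js; the question's custom roundTo helper is replaced here by explicit flooring to the grid, an assumption about what it does), collecting cell origins instead of drawing:

```javascript
// Sketch: collect the grid-cell origins of a pixelated circle of radius r,
// scanning one octant and mirroring each cell eight ways.
function circleCells(r, cell) {
  const cells = new Set();
  for (let x = 0; x <= r; x += cell) {
    // snap the rim height down to the cell grid
    const drawY = Math.floor(Math.sqrt(r * r - x * x) / cell) * cell;
    for (let y = 0; y <= drawY; y += cell) {
      for (const [cx, cy] of [[x, y], [y, x], [-x, y], [-y, x],
                              [x, -y], [y, -x], [-x, -y], [-y, -x]]) {
        cells.add(cx + "," + cy);
      }
    }
  }
  return cells;
}

const cells = circleCells(200, 10);
console.log(cells.has("-200,0"), cells.has("0,-200")); // true true
```

With the two mirrored pairs uncommented, all four quadrants are covered, so the missing-quadrant symptom has to come from the renderer (e.g. bounds handling), not from the coordinates themselves.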
Related
I am creating a playable maze in Three.js. All the objects —the floor, the ceiling and the walls— have been placed within positive X and Z axises. The camera is placed like this:
camera = new THREE.PerspectiveCamera(
    45,
    window.innerWidth / window.innerHeight,
    0.1,
    1000
);
camera.position.set(0, 0.25, 0);
// The camera is placed pointing to the nearest open path from the
// initial position. Check mazeGenerator.js for the reference of
// "coordinates".
if (grid[1][0] == coordinates.W)
{
    camera.lookAt(new THREE.Vector3(1, 0.25, 0));
}
else
{
    camera.lookAt(new THREE.Vector3(0, 0.25, 1));
}
and it moves like this:
function keyboardCheck()
{
    var movementSpeed = 0.02;
    var rotationSpeed = 0.03;
    if (keyboard.pressed('w') || keyboard.pressed('up'))
    {
        camera.translateZ(-movementSpeed);
    }
    if (keyboard.pressed('a') || keyboard.pressed('left'))
    {
        camera.rotateY(rotationSpeed);
    }
    if (keyboard.pressed('d') || keyboard.pressed('right'))
    {
        camera.rotateY(-rotationSpeed);
    }
}
If I place the camera without the lookAt call, it seems to face (0, 0, 0), but what I wanted to do was help the player a little by making the camera face the nearest open path.
In order to move forward, I have to decrement Z, because otherwise hitting the "move forward" key moves the camera backwards. This makes me think that even though I am telling the camera to look in a certain direction, the object's Z axis points opposite to the world's Z axis.
The question itself is: why is this happening?
Thank you very much in advance.
https://github.com/MikelAlejoBR/3D-Maze-95/tree/0.4
https://github.com/MikelAlejoBR/3D-Maze-95/issues/8
The camera is looking down its negative-z axis, so to move forward, do this:
camera.translateZ( - distance );
All other objects are considered to be looking in the direction of their positive-z axis, so to move other objects forward, you would use
object.translateZ( distance );
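The convention can be checked without three.js at all. This is a plain-JS sketch (hypothetical helper names, not three.js API): translateZ moves along the object's local +Z axis, while a camera looks down its local -Z axis, so forward motion for a camera needs a negative distance.

```javascript
// Local +Z axis in world space after a yaw rotation about Y.
function localZAxis(yaw) {
  return { x: Math.sin(yaw), z: Math.cos(yaw) };
}

// translateZ(d) adds d times the local +Z axis to the position.
function translateZ(position, yaw, distance) {
  const axis = localZAxis(yaw);
  return { x: position.x + axis.x * distance,
           z: position.z + axis.z * distance };
}

// A camera at the origin with no rotation looks toward world -Z, so a
// negative distance moves it forward, in the direction it is looking.
const pos = translateZ({ x: 0, z: 0 }, 0, -0.02);
console.log(pos); // { x: 0, z: -0.02 }
```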
three.js r.85
I picked up Processing today, and wrote a program to generate a double slit interference pattern. After tweaking with the values a little, it works, but the pattern generated is fuzzier than what is possible in some other programs. Here's a screenshot:
As you can see, the fringes are not as smooth at the edges as I believe is possible. I expect them to look like this or this.
This is my code:
// All quantities in mm
float slit_separation = 0.005;
float screen_dist = 50;
float wavelength = 5e-4f;
PVector slit1, slit2;
float scale = 1e+1f;
void setup() {
  size(500, 500);
  colorMode(HSB, 360, 100, 1);
  noLoop();
  background(255);

  slit_separation *= scale;
  screen_dist *= scale;
  wavelength *= scale;

  slit1 = new PVector(-slit_separation / 2, 0, -screen_dist);
  slit2 = new PVector(slit_separation / 2, 0, -screen_dist);
}

void draw() {
  translate(width / 2, height / 2);
  for (float x = -width / 2; x < width / 2; x++) {
    for (float y = -height / 2; y < height / 2; y++) {
      PVector pos = new PVector(x, y, 0);
      float path_diff = abs(PVector.sub(slit1, pos).mag() - PVector.sub(slit2, pos).mag());
      float parameter = map(path_diff % wavelength, 0, wavelength, 0, 2 * PI);
      stroke(100, 100, pow(cos(parameter), 2));
      point(x, y);
    }
  }
}
My code is mathematically correct, as far as I can tell, so I am wondering whether I'm doing something wrong in transforming the physical values to pixels on screen.
I'm not totally sure what you're asking- what exactly do you expect it to look like? Would it be possible to narrow this down to a single line that's misbehaving instead of the nested for loop?
But just taking a guess at what you're talking about: keep in mind that Processing enables anti-aliasing by default. To disable it, you have to call the noSmooth() function. You can call it in your setup() function:
void setup() {
  size(500, 500);
  noSmooth();
  // rest of your code
}
It's pretty obvious if you compare them side-by-side:
If that's not what you're talking about, please post an MCVE of just one or two lines instead of a nested for loop. It would also be helpful to include a mockup of what you'd expect versus what you're getting. Good luck!
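Separately from anti-aliasing, it may be worth double-checking the fringe formula itself: the textbook double-slit intensity is proportional to cos²(πΔ/λ), whereas the question's sketch maps Δ mod λ to a phase of 2πΔ/λ before squaring the cosine, which halves the fringe spacing. A small plain-JS check (hypothetical function name, λ value borrowed from the question):

```javascript
// Fringe brightness from the path difference delta: bright at whole
// multiples of the wavelength, dark at half-integer multiples.
function brightness(delta, wavelength) {
  return Math.cos(Math.PI * delta / wavelength) ** 2; // cos^2(pi*delta/lambda)
}

const lambda = 5e-4; // mm, as in the question
console.log(brightness(0, lambda));          // 1: central bright fringe
console.log(brightness(lambda / 2, lambda)); // ~0: first dark fringe
console.log(brightness(lambda, lambda));     // ~1: next bright fringe
```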
I've been trying to teach myself D3.js, but I can't seem to get semantic zoom (zooming positions but not shapes) to work for me.
I've read the d3 zoom docs here, and attempted to functionally copy the svg semantic zoom example code
This is my code:
var X, Y, circle, circles, h, i, j, svg, transform, w, zoom, _i, _j;

w = 1200;
h = 600;
circles = [];
for (j = _i = 0; _i <= 6; j = ++_i) {
  for (i = _j = 0; _j <= 12; i = ++_j) {
    circles.push({r: 25, cx: i * 50, cy: j * 50});
  }
}

X = d3.scale.linear()
    .domain([0, 1])
    .range([0, 1]);

Y = d3.scale.linear()
    .domain([0, 1])
    .range([0, 1]);

zoom = d3.behavior.zoom()
    .x(X)
    .y(Y)
    .on("zoom", function() {
      return circle.attr("transform", transform);
    });

transform = function(d) {
  return "translate(" + (X(d.cx)) + ", " + (Y(d.cy)) + ")";
};

svg = d3.select("body")
    .append("svg")
    .attr("width", w)
    .attr("height", h)
    .call(zoom)
    .append("g");

circle = svg.selectAll("circle")
    .data(circles)
    .enter().append("circle")
    .attr("r", function(d) {
      return d.r;
    }).attr("cx", function(d) {
      return d.cx;
    }).attr("cy", function(d) {
      return d.cy;
    }).attr("transform", transform);
Live version at jsfiddle.
This should be pretty simple. I'm creating grid of circles that should exactly touch when no zoom is applied (distance is 50 px, diameter is 50 px). When I zoom in, I expect the circles to spread apart, with the point under the mouse remaining stationary. I expect the zoom to be smooth and linear with applied mouse wheeling. The circles should remain the same size, though, so that they stop touching when I zoom in; they should overlap when I zoom out.
Instead, initially, the circles are spread out exactly twice as far as they should be. When I zoom in and out, the center point is not under the mouse (and moves around depending on how I pan). Zoom is highly nonlinear, asymptotically approaching a scale of 1 (circles touching) as I zoom out, and rapidly accelerating as I zoom in.
This seems really odd, and I can't spot significant differences between my code and the semantic zoom example, which works as expected. I conclude that I don't actually understand how D3 zoom is supposed to work. Can someone sort me out?
Your code is very close to being correct: Working demo.
Use scale to map the location of objects
Instead of storing exact pixel locations on the objects and using scales whose domain and range are both [0, 1], let the scales do the mapping for you:
for (j = _i = 0; _i <= 6; j = ++_i) {
  for (i = _j = 0; _j <= 12; i = ++_j) {
    circles.push({
      r: 25,
      cx: i,
      cy: j,
      color: "#000"
    });
  }
}

X = d3.scale.linear()
    .domain([0, 12])
    .range([0, w]);

Y = d3.scale.linear()
    .domain([0, 6])
    .range([0, h]);
The change here is that D3 now knows about the aspect ratio of your viewport and in what proportions it should transform the scales, so the point under the mouse stays fixed. Before, it was effectively zooming in and out of a unit square, resulting in a jarring experience.
The problem was the initial position of the circles stacking up on the translation.
Live code with the problem pointed out and fixed, and a few other modifications:
var size = 600;
var scale = 100;

circles = [];
for (var j = 0; j < 6; j++) {
  for (var i = 0; i < 6; i++) {
    circles.push({x: i * scale, y: j * scale});
  }
}

var X = d3.scale.linear()
    .domain([0, 6 * scale])
    .range([0, size]);

var Y = d3.scale.linear()
    .domain([0, 6 * scale])
    .range([0, size]);

function transform(d) {
  return "translate(" + X(d.x) + ", " + Y(d.y) + ")";
}

var circle; /* fwd declaration */

var zoom = d3.behavior.zoom()
    .x(X).y(Y)
    .on("zoom", function () {
      circle.attr("transform", transform);
    });

var svg = d3.select("body").append("svg")
    .attr("width", size).attr("height", size)
    .call(zoom)
    .append("g");

circle = svg.selectAll("circle")
    .data(circles)
    .enter().append("circle")
    .attr("r", 20)
    /* the problem was this initial offset interfering with the
       translation we were applying, resulting in very strange behavior */
    /* .attr("cx", function (d) {return d.x})
       .attr("cy", function (d) {return d.y}) */
    .attr("transform", transform);
The "scale" parameter should do nothing, but if you add in those commented lines, it affects the initial position and causes the non-intuitive effects.
The original problems were:
Initial scale appeared to be more zoomed than it should have been.
Zooming out very far produced a noticeable nonlinear asymptotic effect.
Zooming out, panning around, then zooming back in did not work at all as expected: the diagram slid under the mouse instead of staying pinned.
All of these are straightforward consequences of the initial position:
The initial distances appeared bigger because we applied their original positions plus the zoom translation.
The nonlinear asymptotic effect was the zoom translation distances going to zero asymptotically (as expected), but the initially applied distances not going to zero, giving the appearance of a nonzero zoom asymptote.
While zoomed out, D3 thinks it's zoomed out more than the user perceives (because of the extra distance between circles), so when a pan is applied, the center of the image as D3 tracks it moves differently from what the user expects. This is why the zoom center isn't under the mouse.
You can play with these effects to understand them by uncommenting the initial-position lines and applying the same zoom actions with different scale parameters. With them commented out, the circles all start at screen-space (0, 0), so only the zoom translation is applied, which is what we want.
Props to musically_ut's answer for suggesting the smaller world-space coordinate scale, which shouldn't have made any difference, but did, which helped me identify the problem.
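The arithmetic behind semantic zoom is simple enough to sketch without D3 (hypothetical helper names and numbers): positions go through translate + scale, sizes do not, and the translation is chosen so the world point under the mouse maps back to the mouse.

```javascript
// Semantic zoom: screen = translate + scale * world; radii are untouched.
function toScreen(world, scale, translate) {
  return { x: translate.x + scale * world.x,
           y: translate.y + scale * world.y };
}

// To keep the world point under the mouse pinned while changing scale:
// translate = mouse - scale * worldUnderMouse
function pinUnderMouse(mouse, worldUnderMouse, scale) {
  return { x: mouse.x - scale * worldUnderMouse.x,
           y: mouse.y - scale * worldUnderMouse.y };
}

const mouse = { x: 300, y: 300 };
const world = { x: 150, y: 150 };   // world point currently under the mouse
const t = pinUnderMouse(mouse, world, 2);
console.log(toScreen(world, 2, t)); // { x: 300, y: 300 }: still under the mouse
```

This is exactly why any extra offset baked into cx/cy breaks the behavior: D3 computes the translation assuming positions come only from the scales, so a second, untracked offset shifts every screen position away from where D3 thinks it is.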
I'm using Particle to draw irregular shapes in Three.js, the code snippet is like:
var heart = function(context) {
    context.globalAlpha = 0.5;
    var x = 0, y = 0;
    context.scale(0.1, -0.1); // Scale so canvas render can redraw within bounds
    context.beginPath();
    context.bezierCurveTo(x + 2.5, y + 2.5, x + 2.0, y, x, y);
    context.bezierCurveTo(x - 3.0, y, x - 3.0, y + 3.5, x - 3.0, y + 3.5);
    ...
    context.closePath();
    context.lineWidth = 0.1; // 0.05
    context.stroke();
};

var material = new THREE.ParticleCanvasMaterial({
    program: heart,
    blending: THREE.AdditiveBlending
});
material.color.setRGB(255, 0, 0);

var particle = new THREE.Particle(material);
What I want to do is pick the irregular shape properly. My question is: if I draw the shape this way, how can I get the color of every pixel, so that I can use it in the picking algorithm?
Thanks.
Have you looked into toDataURL()?
I use that in my three.js logic to grab and save the canvas out of the browser. From looking at this:
http://www.patrick-wied.at/blog/how-to-create-transparency-in-images-with-html5canvas
It looks to me like you can also peer into the R, G, B and A of each pixel (the article does this via the 2D context's getImageData()), change them if need be, and write the result back to the visible framebuffer.
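The per-pixel indexing is the same whatever you do with the values. This sketch shows how to read one pixel's RGBA from an ImageData-style buffer (the flat Uint8ClampedArray that context.getImageData() returns, simulated here with a hand-built two-pixel array):

```javascript
// ImageData stores pixels row by row, 4 bytes (R, G, B, A) per pixel.
function pixelAt(data, width, x, y) {
  const i = (y * width + x) * 4;
  return { r: data[i], g: data[i + 1], b: data[i + 2], a: data[i + 3] };
}

// A 2x1 buffer: first pixel opaque red, second fully transparent.
const data = Uint8ClampedArray.from([255, 0, 0, 255, 0, 0, 0, 0]);
console.log(pixelAt(data, 2, 0, 0)); // { r: 255, g: 0, b: 0, a: 255 }
```

For picking, you would read the pixel under the mouse this way and test its alpha (or a unique per-shape color) to decide whether the irregular shape was hit.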
Using VC++ and OpenCV. Here's what I'm trying to do:
find the first three nearly-horizontal hough lines and draw them.
find all the nearly-vertical lines and draw them
if any vertical line is above the horizontal line then a FLAG is set to 0
if there is no vertical Hough line above (all are below) the horizontal line then FLAG=1
int n, i, c = 0;
int flag = 0;

cvCanny( src, dst, 50, 150, 3 );
lines = cvHoughLines2( dst, storage, CV_HOUGH_PROBABILISTIC, 1, CV_PI/180, 10, 5, 5 );
n = lines->total;

for( i = 0; i < n; i++ )
{
    CvPoint* line = (CvPoint*)cvGetSeqElem(lines, i);
    CvPoint pt1, pt2, hpt1, hpt2, vpt1, vpt2;
    int hy = 0, vy = 0;

    pt1 = line[0];
    pt2 = line[1];
    theta = atan( (double)(pt2.y - pt1.y)/(pt2.x - pt1.x) ); /* slope of line */
    degree = theta*180/CV_PI;

    if( fabs(degree) < 8 ) /* checking for near-horizontal line */
    {
        c++;
        if( c > 0 && c < 5 ) /* main horizontal lines come first */
        {
            cvLine( out, pt1, pt2, CV_RGB(255, 255, 255), 1, CV_AA, 0 );
            hpt1 = line[0];
            hpt2 = line[1];
            if( hpt1.y > hpt2.y ) /* finds out lower end-point */
                hy = hpt1.y;
            else
                hy = hpt2.y;
        }
    }
    if( fabs(degree) > 70 ) /* near-vertical lines */
    {
        cvLine( out, pt1, pt2, CV_RGB(255, 255, 255), 1, CV_AA, 0 );
        vpt1 = line[0];
        vpt2 = line[1];
        if( vpt1.y > vpt2.y ) /* finds upper end-pt of vertical line */
            vy = vpt1.y;
        else
            vy = vpt2.y;
        if( vy >= hy ) /* if vert line is lower than horizontal line */
            flag = 1;
        else
            flag = 0;
    }
}
display( out, "hough lines" );
return flag;
}
However, even for an image where vertical lines are detected above the horizontal line, the flag still returns 1. Am I counting along the axis wrongly? Please help me out.
The if( fabs(degree) > 70 ) and if( fabs(degree) < 8 ) tests look wrong. An angle of about 180 degrees also means almost horizontal, so you need to account for the periodicity of angles (about 360 is almost horizontal too). A clean way to handle this is if (fabs(cos(angle - desired_angle)) > 0.996), which means roughly "angle and desired_angle are within 5 degrees of each other, disregarding direction". 0.996 is roughly the cosine of 5 degrees; if you need it more exact, put more digits there (0.9961946980917455 is a closer match).
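The suggested test can be sketched like this (plain JS rather than the question's C, with the degree-to-radian conversion made explicit, since cos expects radians):

```javascript
// True when angleDeg is within about 5 degrees of desiredDeg, treating a line
// and its reverse (180 degrees apart) as the same direction.
function nearAngle(angleDeg, desiredDeg) {
  const rad = (angleDeg - desiredDeg) * Math.PI / 180;
  return Math.abs(Math.cos(rad)) > 0.996; // cos(5 deg) is about 0.9962
}

console.log(nearAngle(2, 0));   // true  (nearly horizontal)
console.log(nearAngle(178, 0)); // true  (same line, opposite direction)
console.log(nearAngle(80, 0));  // false (nearly vertical)
```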
Also, your loop order is off. Your description says "find the first three nearly-horizontal hough lines and draw them; find all the nearly-vertical lines and draw them; check whether any vertical line is above the horizontal line" in that sequence, but you loop over all lines in arbitrary order and process them independently. A vertical line can come before any horizontal one, in which case hy has not been set yet when you compare against it.
Third,
if( hpt1.y > hpt2.y ) //finds out lower end-point
    hy = hpt1.y;
else
    hy = hpt2.y;
vs
if( vpt1.y > vpt2.y ) //finds upper end-pt of vertical line
    vy = vpt1.y;
else
    vy = vpt2.y;
You use the same code to find the lower coordinate as to find the higher one. Do you think that could work?
Fourth,
if( vy >= hy ) //if vert line is lower than horizontal line
flag = 1;
else
flag = 0;
The value of flag depends on the LAST pass through this piece of code. This doesn't match the any in your description.
A much easier approach is to not use PPHL (the progressive probabilistic Hough lines algorithm) but SHL (the standard Hough lines algorithm). That gives you the polar form of each line, with an angle and a radius, so you can just check the angle without calculating it yourself.
If the output angle is around 0° or 180° it's a vertical line, and if it's around 90° it's a horizontal line.
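Under that convention (theta in the standard Hough output is the angle of the line's normal), the check reduces to something like this sketch (plain JS, hypothetical function name and tolerance):

```javascript
// Classify a standard-Hough line by its theta (degrees): theta is the angle
// of the line's normal, so ~0 or ~180 means vertical, ~90 means horizontal.
function classifyHoughTheta(thetaDeg, tolDeg = 8) {
  const t = ((thetaDeg % 180) + 180) % 180; // fold into [0, 180)
  if (t < tolDeg || t > 180 - tolDeg) return "vertical";
  if (Math.abs(t - 90) < tolDeg) return "horizontal";
  return "other";
}

console.log(classifyHoughTheta(2));  // "vertical"
console.log(classifyHoughTheta(91)); // "horizontal"
console.log(classifyHoughTheta(45)); // "other"
```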