How to draw a curve that goes both up and down? - c#-4.0

I've used the following code to make 3 points, draw them to a bitmap, then draw the bitmap to the main form. However it seems to always draw point 3 above point 2, because its Y coordinate is lower than point 2's. Is there a way to get around this, as I need a curve that curves up and down, rather than just up?
Point[] pntPoints = new Point[3];
Bitmap bit = new Bitmap(490, 490);
Graphics g = Graphics.FromImage(bit);
Graphics form = this.CreateGraphics();
pntPoints[0] = this.pictureBox1.Location;
pntPoints[1] = new Point(100, 300);
pntPoints[2] = new Point(200, 150);
g.DrawCurve(p, pntPoints); // p is the Pen used for drawing
form.DrawImage(bit, 0, 5);
bit.Dispose();
g.Dispose();

The Y coordinate for point 3 is not lower, it's actually higher on screen. The (0, 0) point of a Graphics surface is in the top-left corner, and the Y value increases from the top down rather than from the bottom up. So a point (0, 100) will appear higher than (0, 200) on the resulting image.
If you want a curve that goes up and then down, place your first point at (0, 489), your second point at (100, 190) and your third point at (200, 340).
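The flip between a bottom-up "math" Y coordinate and the top-down screen Y can be sketched like this (in Python for illustration; the 490 bitmap height comes from the question's code, and the helper name is hypothetical):

```python
# Hypothetical helper: convert a bottom-up y coordinate to the canvas's
# top-down coordinate system by mirroring it against the bitmap height.
def to_screen_y(math_y, height=490):
    return height - 1 - math_y

to_screen_y(0)     # -> 489: the bottom row of the bitmap
to_screen_y(100)   # -> 389: 100 pixels above the bottom
```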

I recommend you put in a debug function that will mark and identify the points themselves, so you can see exactly where they are. A pixel in an off color, the index of the point, and the coordinates together will help you identify what is going where.
Now, I'm wondering, are those two points really supposed to be absolute, or are they supposed to be relative to the first point?

only writing visible points to disk of an overplotted scatterplot

I am creating matplotlib scatterplots of around 10000 points. At the point size I am using, this results in overplotting, i.e. some of the points will be hidden by the points that are plotted over them.
While I don't mind that I cannot see the hidden points, they are redundantly written out when I write the figure to disk as PDF (or another vector format), resulting in a large file.
Is there a way to create a vector image where only the visible points are written to the file? This would be similar to the concept of "flattening" / merging layers in photo editing software. (I would still like to retain the image as a vector, since I want the ability to zoom in.)
Example plot:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

np.random.seed(15)
df = pd.DataFrame({'x': np.random.normal(10, 1.2, 10000),
                   'y': np.random.normal(10, 1.2, 10000),
                   'color': np.random.normal(10, 1.2, 10000)})
df.plot(kind="scatter", x="x", y="y", c="color", s=80, cmap="RdBu_r")
plt.show()
tl;dr
I don't know of any simple solution such as
RemoveOccludedCircles(C)
The algorithm below requires some implementation, but it shouldn't be too bad.
Problem reformulation
While we could try to remove existing circles when adding new ones, I find it easier to think about the problem the other way round, processing all circles in reverse order and pretending to draw each new circle behind the existing ones.
The main problem then becomes: How can I efficiently determine whether one circle would be completely hidden by another set of circles?
Conditions
In the following, I will describe an algorithm for the case where the circles are sorted by size, such that larger circles are placed behind smaller circles. This includes the special case where all circles have the same size. An extension to the general case would actually be significantly more complicated, as one would have to maintain a triangulation of the intersection points. In addition, I will make the assumption that no two circles have the exact same properties (radius and position); such identical circles can easily be filtered out beforehand.
Datastructures
C: A set of visible circles
P: A set of control points
Control points will be placed in such a way that no newly placed circle can become visible unless either its center lies outside the existing circles or at least one control point falls inside the new circle.
Problem visualisation
In order to better understand the role of control points, their maintenance and the algorithm, have a look at the following drawing:
Processing 6 circles
In the linked image, active control points are painted in red. Control points that are removed after each step are painted in green or blue, where blue points were created by computing intersections between circles.
In image g), the green area highlights the region in which the center of a circle of same size could be placed such that the corresponding circle would be occluded by the existing circles. This area was derived by placing circles on each control point and subtracting the resulting area from the area covered by all visible circles.
Control point maintenance
Whenever adding one circle to the canvas, we add four active points, which are placed on the border of the circle in an equidistant way. Why four? Because no circle of same or bigger size can be placed with its center inside the current circle without containing one of the four control points.
After placing one circle, the following assumption holds: A new circle is completely hidden by existing circles if
Its center falls into a visible circle.
No control point lies strictly inside the new circle.
In order to maintain this assumption while adding new circles, the set of control points needs to be updated after each addition of a visible circle:
Add 4 new control points for the new circle, as described before.
Add new control points at each intersection of the new circle with existing visible circles.
Remove all control points that lie strictly inside any visible circle.
This rule will maintain control points at the outer border of the visible circles in such a dense way that no new visible circle intersecting the existing circles can be placed without 'eating' at least one control point.
Pseudo-Code
AllCircles <- All circles, sorted from front to back
C <- {}   // the set of visible circles
P <- {}   // the set of control points
for X in AllCircles {
    if (Inside(center(X), C) AND Outside(P, X)) {
        // ignore circle, it is occluded!
    } else {
        C <- C + X
        P <- P + CreateFourControlPoints(X)
        P <- P + AllCuttingPoints(X, C)
        RemoveHiddenControlPoints(P, C)
    }
}
DrawCirclesInReverseOrder(C)
The functions 'Inside' and 'Outside' are a bit abstract here: 'Inside' returns true if a point is contained in one or more circles from a set of circles, and 'Outside' returns true if all points from a set of points lie outside of a circle. But none of the functions used should be hard to write out.
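As a rough sketch (in Python rather than the pseudo-code's notation), the two predicates could look like this, assuming circles are stored as (cx, cy, r) tuples:

```python
import math

def Inside(point, circles):
    """True if the point is contained in one or more circles of the set."""
    px, py = point
    return any(math.hypot(px - cx, py - cy) <= r for cx, cy, r in circles)

def Outside(points, circle):
    """True if no point of the set lies strictly inside the circle."""
    cx, cy, r = circle
    return all(math.hypot(px - cx, py - cy) >= r for px, py in points)

Inside((0, 0), [(1, 0, 2)])    # -> True: (0, 0) lies inside the circle
Outside([(5, 5)], (0, 0, 1))   # -> True: (5, 5) is outside the circle
```

A real implementation would replace `math.hypot` with the exact/symbolic comparison discussed below.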
Minor problems to be solved
How to determine in a numerically stable way whether a point is strictly inside a circle? -> This shouldn't be too bad to solve, as all the points involved are never more complicated than the solution of a quadratic equation. It is important, though, not to rely solely on floating point representations, as these will be numerically insufficient and some control points would probably get completely lost, effectively leaving holes in the final plot. So keep a symbolic and precise representation of the control point coordinates. I would try SymPy to tackle this problem as it seems to cover all the required math. The formula for intersecting circles can easily be found online, for example here.
How to efficiently determine whether a circle contains any control point, or whether any visible circle contains the center of a new circle? -> In order to solve this, I would propose keeping all elements of P and C in grid-like structures, where the width and height of each grid element equals the radius of the circles. On average, the number of active points and visible circles per grid cell should be in O(1), although it is possible to construct artificial setups with arbitrary amounts of elements per grid cell, which would turn the overall algorithm from O(N) to O(N * N).
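A minimal sketch of such a grid, assuming equal-radius circles and a cell size equal to that radius (all names are illustrative): any item relevant to a query point can only live in the 3x3 block of cells around the point's own cell.

```python
from collections import defaultdict
from math import floor

# Bucket items by grid cell; the cell size equals the circle radius r,
# so a circle containing a query point lies in a neighbouring cell.
def insert(grid, r, item, x, y):
    grid[(floor(x / r), floor(y / r))].append(item)

def nearby(grid, r, x, y):
    cx, cy = floor(x / r), floor(y / r)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            yield from grid[(cx + dx, cy + dy)]

grid = defaultdict(list)
insert(grid, 1.0, 'c1', 0.4, 0.4)
found = list(nearby(grid, 1.0, 0.9, 0.9))   # -> ['c1']
```

Only candidates from the neighbouring cells then need the exact inside/outside test.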
Runtime thoughts
As mentioned above, I would expect the runtime to scale linearly with the number of circles on average, because the number of visible circles and control points in each grid cell will be in O(1) unless the input is constructed in an evil way.
The data structures should be easily maintainable in memory if the circle radius isn't excessively small and computing intersections between circles should also be quite fast. I'm curious about final computational time, but I don't expect that it would be much worse than drawing all circles in a naive way a single time.
My best guess would be to use a hexbin. Note that with a scatter plot, the dots that are plotted last will be the only ones visible. With a hexbin, all coinciding dots will be averaged.
If interested, the centers of the hexagons can be used to create a scatter plot again, showing only this reduced set of points.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

np.random.seed(15)
df = pd.DataFrame({'x': np.random.normal(10, 1.2, 10000),
                   'y': np.random.normal(10, 1.2, 10000),
                   'color': np.random.normal(10, 1.2, 10000)})
fig, ax = plt.subplots(ncols=4, gridspec_kw={'width_ratios': [10, 10, 10, 1]})
norm = plt.Normalize(df.color.min(), df.color.max())
df.plot(kind="scatter", x="x", y="y", c="color", s=10, cmap="RdBu_r", norm=norm, colorbar=False, ax=ax[0])
hexb = ax[1].hexbin(df.x, df.y, df.color, cmap="RdBu_r", norm=norm, gridsize=80)
centers = hexb.get_offsets()
values = hexb.get_array()
ax[2].scatter(centers[:, 0], centers[:, 1], c=values, s=10, cmap="RdBu_r", norm=norm)
plt.colorbar(hexb, cax=ax[3])
plt.show()
Here is another comparison. The number of dots is reduced by a factor of 10, and the plot is more "honest", as overlapping dots are averaged.

Libgdx sprite ring object passing through it

How is it possible that an object can pass through a ring sprite like in the image below?
Can you please help me? I have no idea how to do that.
I think you posted an incorrect image. To get the image you posted, you just have to draw the red bar on top of the black ring.
I guess you want the left side of the ring to be drawn in front of the bar and the right side behind it, so the bar visually goes through. Well, this is simply not so easy in 2D, because of draw order.
I have a couple of suggestion you can explore.
Always draw the ring on top of the bar, but when a collision is happening, calculate where the bar overlaps and don't draw the pixels in that place. You can use a Pixmap for calculations like this. Depending on the size of your images, this could be very expensive to calculate each frame.
A faster but slightly more hacky way could be to split the red bar into multiple images and, if a certain part of it should be overlapped by the ring, draw it first, otherwise draw it after the ring. Depending on what the red bar is going to look like in your end product and how many possible angles the bar can have, I can imagine this being very tricky to get right.
Use 3D for this. You could have a billboard with a slight angle for the ring and have the bar locked on the distance axis at the ring's center. However, at certain angles of entrance and exit you will get Z-fighting, since the pixels will be at the same distance from the camera. This might or might not be noticeable, and I have no idea how LibGDX would handle Z-fighting.
I want to add this solution:
If the object is going to pass through the ring horizontally, I propose to divide the ring sprite into two sprites (sprite 1 & sprite 2).
You just have to draw the sprites in this order:
Sprite1
Sprite Object
Sprite2
You can do the same if the object is going to pass through the ring vertically.
PS: this solution doesn't work if the object is going to pass through the ring both vertically and horizontally.
Hope this was helpful.
Good luck.

Find right scale to draw an iteration of L-system

I made an OCaml program which draws on a Graphics window the representation of a given L-system, using its definition, its command interpretation and an iteration count.
The drawing is made using turtle graphics (the turtle can draw a line, move to a given point and turn by a given angle).
The problem I have is that all the lines have the same size (and this is how it needs to be), and when I draw an L-system, if I don't give the right line size, the drawing goes out of the graphics window, as you can see in that picture.
I know I can move the drawing to the left, but I always start drawing from the center of the window. What I need help with is how to set the right line size for a given sequence of commands. For example:
I have that list of instructions below :
ACAABAABABACACAACACACACAACAABABACAABAABABACACAACACACACAACAABABACAABAABABACACAACACACACAACAABABACAABAABABACACAACACACACAACAABA.
where A means: Draw a line of "X" size
B : turn π/2
C: turn -π/2.
How can I find the best value for X (the size of the line) in order to have a drawing that stays inside the graphics window?
The only solution I've found is to start from a given value (for example X = 20) and try to draw the L-system with this value; if it goes out of the window, then try again with X/2 until it works!
Does anybody have a better idea?
You could do some analysis of the L-system to determine its range, and scale appropriately. However, this is not much different from just drawing it once with an arbitrary size (say, 1), seeing how big it is, and scaling (once) to fit the screen (rather than halving X until it works). For example, if you draw it with scale = 1 and it is 40 units in size, and your screen is 400 units, you know you can draw with scale = 10 and still fit. You can also use this first pass to determine an XY offset so you can center the drawing.
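The "measure once at unit scale, then rescale" idea can be sketched for the question's A/B/C commands (the window size, margin, and function name are illustrative assumptions, not from the original program):

```python
import math

# Trace the turtle path once with unit-length lines, measure its extent,
# and derive the largest line size X that keeps the drawing in the window.
# A = draw a line, B = turn pi/2, C = turn -pi/2 (as in the question).
def best_line_size(commands, window_w, window_h, margin=10):
    x = y = 0.0
    angle = 0.0
    xs, ys = [x], [y]
    for c in commands:
        if c == 'A':
            x += math.cos(angle)
            y += math.sin(angle)
            xs.append(x)
            ys.append(y)
        elif c == 'B':
            angle += math.pi / 2
        elif c == 'C':
            angle -= math.pi / 2
    extent = max(max(xs) - min(xs), max(ys) - min(ys), 1e-9)
    return (min(window_w, window_h) - 2 * margin) / extent

# e.g. a unit square (four lines, three left turns) in a 400x400 window:
size = best_line_size('ABABABA', 400, 400)
```

The same min/max values give the offset needed to center the drawing.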
My idea is to make one pass to evaluate the size of your labyrinth. Let (W : int) be your width variable. When the painter moves west you decrement W, and when the painter moves east you increment W. If m1 is the maximal value of W and m2 is the minimal value (maybe < 0) of W during the process, then the total width of your diagram is padding + linewidth * (m1 - m2).
For example, let the painter initially look East.
AAAAABABAAAAAABABA
i.e.
<<<<<.
>.>>>>
During the process, W will change this way (one value after each instruction, starting from W = 0):

A A A A A B A B A A A A A A  B  A  B  A
1 2 3 4 5 5 5 5 4 3 2 1 0 -1 -1 -1 -1  0

The robot makes 5 steps to the East, moves up, makes 6 steps to the West, moves down and returns to the start column. In this case m1 = 5 and m2 = -1, and you need a canvas of width m1 - m2 = 5 - (-1) = 6 times the line width (plus padding).
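The single-pass width measurement can be sketched like this (only the A and B instructions from the example string are handled; the function name is illustrative):

```python
# Track the horizontal position w while replaying the commands:
# A = one step forward, B = turn pi/2 left; the painter starts facing East.
def width_in_lines(commands):
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # E, N, W, S
    d = 0
    w = 0
    m1 = m2 = 0                                 # max / min of w seen so far
    for c in commands:
        if c == 'A':
            w += dirs[d][0]
            m1 = max(m1, w)
            m2 = min(m2, w)
        elif c == 'B':
            d = (d + 1) % 4
    return m1 - m2

width_in_lines('AAAAABABAAAAAABABA')   # -> 6 (m1 = 5, m2 = -1)
```

Tracking the vertical position the same way gives the required canvas height.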

Change perspective in POV-Ray? (less convergence)

Can you change the perspective in POV-Ray, so that convergence between parallel lines does not look so steep?
E.g. change this angle (the convergence of the checkered floor into the distance) here
To an angle like this
I want it to seem like you're looking at something nearby, so with a smaller angle of convergence in parallel lines.
To illustrate it more: instead of a view like this
Use a view like this
Move the camera backwards and zoom in (by making the angle smaller):
camera {
  perspective
  location <0,0,-15>  // move this backwards
  sky y
  up y
  angle 30            // make this smaller
  right (image_width/image_height)*x
  look_at <0,0,0>
}
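To keep the subject at roughly the same size on screen while narrowing the angle, the camera distance has to grow like 1/tan(angle/2). A small sketch of that relationship (Python for illustration, not POV-Ray code; names and values are assumptions):

```python
import math

# Distance at which an object of the given width fills the frame
# for a camera with the given horizontal view angle (in degrees).
def distance_for_angle(subject_width, angle_deg):
    half = math.radians(angle_deg) / 2
    return (subject_width / 2) / math.tan(half)

d_wide = distance_for_angle(10, 60)   # wide angle: camera close
d_tele = distance_for_angle(10, 30)   # narrow angle: camera further back
```

So halving the angle roughly doubles the distance needed, which is exactly the "move backwards and zoom in" recipe above.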
You can go to the extreme by using an orthographic "camera":
camera {
  orthographic
  location <0,0,-15>  // move backwards, no matter how far
  sky y
  up y * h            // where h = height you want to cover
  right x * w         // where w = width you want to cover
  look_at <0,0,0>
}
The other extreme is the fish-eye lens.
You need to reduce the field of view of your camera's view frustum. The larger the field of view, the more stuff you're trying to squeeze into the output of your camera's render, and so the parallel lines will converge faster. So in your first example with a cube, the camera will be more focused on the cube and the areas immediately around it than on the whole environment.
The other option is to make your far plane much closer to your near plane, so you don't see many things that are far off. In your first image example, you'd then only see the first four or five grid squares.

Algorithm for Polygon Image Fill

I want an efficient algorithm to fill a polygon with an image; specifically, I want to fit an image into a trapezoid. Currently I am doing it in two steps:
1) First perform a StretchBlt on the image,
2) Then perform a column-by-column vertical StretchBlt.
Is there a better method to implement this? Is there any generic and fast algorithm which can fill any polygon?
Thanks,
Sunny
I can't help you with the distortion part, but filling polygons is pretty simple, especially if they are convex.
For each Y scan line have a table indexed by Y, containing a minX and maxX.
For each edge, run a DDA line-drawing algorithm, and use it to fill in the table entries.
For each Y line, now you have a minX and maxX, so you can just fill that segment of the scan line.
The hard part is a mental trick - do not think of coordinates as specifying pixels. Think of coordinates as lying between the pixels. In other words, if you have a rectangle going from point 0,0 to point 2,2, it should light up 4 pixels, not 9. Most problems with polygon-filling revolve around this issue.
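The three steps above could be sketched like this (assuming a convex polygon given as integer (x, y) vertices; the helper name is illustrative):

```python
# For each scan line y, record minX/maxX by walking every non-horizontal
# edge with a simple DDA, then fill each [minX, maxX) span.
def scanline_spans(vertices):
    ys = [y for _, y in vertices]
    y_min, y_max = min(ys), max(ys)
    table = {y: [float('inf'), float('-inf')] for y in range(y_min, y_max)}
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if y0 == y1:
            continue                      # horizontal edges add no spans
        if y0 > y1:
            x0, y0, x1, y1 = x1, y1, x0, y0
        slope = (x1 - x0) / (y1 - y0)
        for y in range(y0, y1):           # coordinates lie BETWEEN pixels
            x = x0 + slope * (y - y0)
            lo, hi = table[y]
            table[y] = [min(lo, x), max(hi, x)]
    return table

# A 2x2 rectangle lights up 4 pixels (two scan lines of two), not 9:
spans = scanline_spans([(0, 0), (2, 0), (2, 2), (0, 2)])
```

Note how treating coordinates as lying between pixels falls out naturally: the y loop runs over `range(y0, y1)`, so the 0..2 rectangle yields exactly two scan lines.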
ADDED: OK, it sounds like what you're really asking is how to stretch the image to a non-rectangular (but trapezoidal) shape. I would do it in terms of parameters s and t, each going from 0 to 1. In other words, a location in the original rectangle is (x + w0*s, y + h0*t). Then define a function such that s and t also map to positions in the trapezoid, such as ((x + t*a) + s*(w0*(1-t) + w1*t), y + h1*t). This defines a coordinate mapping between the two shapes. Then just scan x and y, converting to s and t, and map points from one shape to the other. You probably want a little smoothing filter rather than a direct copy.
ADDED to try to give a better explanation:
I'm supposing both your rectangle and trapezoid have top and bottom edges parallel to the X axis. The lower-left corner of the rectangle is <x0,y0>, and the lower-left corner of the trapezoid is <x1,y1>. I assume the rectangle's width and height are <w,h>.
For the trapezoid, I assume it has height h1, that its lower width is w0, and that its upper width is w1. I assume its left edge "slants" by a distance a, so that the position of its upper-left corner is <x1+a, y1+h1>. Now suppose you iterate <x,y> over the rectangle. At each point, compute s = (x-x0)/w and t = (y-y0)/h, which are both in the range 0 to 1. (I'll let you figure out how to do that without using floating point.) Then convert that to a coordinate in the trapezoid, as xt = ((x1 + t*a) + s*(w0*(1-t) + w1*t)) and yt = y1 + h1*t. Then <xt,yt> is the point in the trapezoid corresponding to <x,y> in the rectangle. Now I'll let you figure out how to do the copying :-) Good luck.
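The mapping can be written out directly (a sketch; parameter names follow the explanation above, and the example corner values are made up for illustration):

```python
# Map a point <x, y> of the rectangle (lower-left <x0, y0>, size w x h)
# to the trapezoid (lower-left <x1, y1>, lower width w0, upper width w1,
# height h1, left-edge slant a).
def rect_to_trap(x, y, x0, y0, w, h, x1, y1, w0, w1, h1, a):
    s = (x - x0) / w
    t = (y - y0) / h
    xt = (x1 + t * a) + s * (w0 * (1 - t) + w1 * t)
    yt = y1 + h1 * t
    return xt, yt

# Corners of the rectangle land on the corners of the trapezoid:
rect_to_trap(0, 0, 0, 0, 10, 10, 0, 0, 10, 6, 8, 2)   # -> (0.0, 0.0)
```

With (x, y) = (0, 10) the result is (2.0, 8.0), i.e. the trapezoid's upper-left corner <x1+a, y1+h1>, as stated above.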
P.S. And please don't forget - coordinates fall between pixels, not on them.
Would it be feasible to sidestep the problem and use OpenGL to do this for you? OpenGL can render to memory contexts and if you can take advantage of any hardware acceleration by doing this that'll completely dwarf any code tweaks you can make on the CPU (although on some older cards memory context rendering may not be able to take advantage of the hardware).
If you want to do this completely in software MESA may be an option.
