I am intersecting a plane with the 12 segments that represent the edges of a cube. The problem is that every intersection produces a point, even when that point does not lie on the segment. Segments are finite, so that shouldn't happen, right?
Code:
from sympy import Point3D, Plane, intersection, Segment3D
# Cube's vertices
v = (Point3D(500, 500, 500), Point3D(-500, 500, 500), Point3D(-500, -500, 500),
Point3D(500, -500, 500), Point3D(500, 500, -500), Point3D(-500, 500, -500),
Point3D(-500, -500, -500), Point3D(500, -500, -500))
# Cube's edges
a = (Segment3D(v[0], v[1]), Segment3D(v[1], v[2]),
Segment3D(v[2], v[3]), Segment3D(v[3], v[0]),
Segment3D(v[0], v[4]), Segment3D(v[1], v[5]),
Segment3D(v[2], v[6]), Segment3D(v[3], v[7]),
Segment3D(v[4], v[5]), Segment3D(v[5], v[6]),
Segment3D(v[6], v[7]), Segment3D(v[7], v[4]))
# Example plane which should generate 3 points
plano = Plane(Point3D(450, 400, 400), Point3D(400, 450, 400), Point3D(400, 400, 450))
bad = []
good = []
for i in range(12):
    inter = intersection(plano, a[i])
    # This should run only when the intersection produces something, but it always runs:
    if inter:
        bad.append(inter[0])
        # This comparison should not be necessary; it checks whether the point is in the desired range
        if abs(inter[0][0]) <= 500 and abs(inter[0][1]) <= 500 and abs(inter[0][2]) <= 500:
            good.append(inter[0])
print(len(bad), bad)
print(len(good), good)
Output:
12 [Point3D(250, 500, 500), Point3D(-500, 1250, 500), Point3D(1250, -500, 500), Point3D(500, 250, 500), Point3D(500, 500, 250), Point3D(-500, 500, 1250), Point3D(-500, -500, 2250), Point3D(500, -500, 1250), Point3D(1250, 500, -500), Point3D(-500, 2250, -500), Point3D(2250, -500, -500), Point3D(500, 1250, -500)]
3 [Point3D(250, 500, 500), Point3D(500, 250, 500), Point3D(500, 500, 250)]
9 of the 12 points are not part of any segment
This is fixed on SymPy master but the fix is not yet in a released version. On master I get:
3 [Point3D(250, 500, 500), Point3D(500, 250, 500), Point3D(500, 500, 250)]
3 [Point3D(250, 500, 500), Point3D(500, 250, 500), Point3D(500, 500, 250)]
(which I assume is what you expect)
This is a bug in SymPy 1.4 and earlier; it will be fixed in the 1.5 release.
https://github.com/sympy/sympy/pull/16637
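Until 1.5 is out, one possible workaround on a released version (a sketch reusing plano and a from the question, and assuming Segment3D.contains behaves as expected on the returned point) is to keep only the intersection points that actually lie on the finite segment:
# Workaround for SymPy 1.4 and earlier: filter the line-style intersection
# points by checking containment in the finite segment.
good = []
for seg in a:
    inter = intersection(plano, seg)
    if inter and seg.contains(inter[0]):
        good.append(inter[0])
print(len(good), good)  # should list only the 3 on-segment points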
Related
I've created a number of cubic Bézier curves in one SVG path and I'm now trying to fill the path from top to bottom with stroke-dasharray and stroke-dashoffset, but the animation keeps restarting at the start point of every cubic Bézier curve, i.e. at every C in the path. How can I combine the curves so the animation runs from the very top to the very bottom?
Here's my svg animation:
https://codepen.io/mj2023/pen/wvEMwyB
<style>
svg {
width: 600px;
overflow:visible;
}
#lined-svg path {
stroke-dasharray: 7867.43;
stroke-dashoffset: 7867.43;
animation: dash 5s linear alternate infinite;
}
@keyframes dash {
from {
stroke-dashoffset: 7867.43;
}
to {
stroke-dashoffset: 0;
}
}
</style>
<svg id="lined-svg" viewBox="0 0 600 4088">
<path stroke="red" fill="none" stroke-width="6" d="M 300, 0 C 300, 114, 600, 76, 600, 190 C 600, 304, 300, 266, 300, 380 M 300, 380, C 300, 542.6, 600, 488.4, 600, 651 C 600, 813.6, 300, 759.4, 300, 922 M 300, 922, C 300, 1003.6, 600, 976.4, 600, 1058 C 600, 1139.6, 300, 1112.4, 300, 1194 M 300, 1194, C 300, 1308, 600, 1270, 600, 1384 C 600, 1498, 300, 1460, 300, 1574 M 300, 1574, C 300, 1655.6, 600, 1628.4, 600, 1710 C 600, 1791.6, 300, 1764.4, 300, 1846 M 300, 1846, C 300, 2008.6, 600, 1954.4, 600, 2117 C 600, 2279.6, 300, 2225.4, 300, 2388 M 300, 2388, C 300, 2604.6, 600, 2532.4, 600, 2749 C 600, 2965.6, 300, 2893.4, 300, 3110 M 300, 3110, C 300, 3224, 600, 3186, 600, 3300 C 600, 3414, 300, 3376, 300, 3490 M 300, 3490, C 300, 3555.4, 600, 3533.6, 600, 3599 C 600, 3664.4, 300, 3642.6, 300, 3708 M 300, 3708, C 300, 3822, 600, 3784, 600, 3898 C 600, 4012, 300, 3974, 300, 4088"></path>
</svg>
I have two lists and I want to create a pandas DataFrame with 3 columns, where one of the columns contains the tuples produced by zipping the two lists. I tried the following:
import pandas as pd
import numpy as np
S_x = [80, 90, 100, 200, 300, 600, 800, 900, 1000, 1200]
S_y = [800, 1000, 1200, 450, 80, 100, 60, 300, 700, 900]
S_z=list(zip(S_x,S_y))
frame4 = pd.DataFrame(np.column_stack([S_x, S_y,S_z]), columns=["Recovered Data", "Percentage Error","Zipped"])
In the "Zipped" column I want the elements to be tuples, as they appear in the list S_z, while the first two columns should stay as they are. When I run my code I get the error
ValueError: Shape of passed values is (4, 10), indices imply (3, 10)
I don't know what I am doing wrong. I am using Python 3.x.
When you use np.column_stack, it automatically unzips your S_z, so np.column_stack([S_x, S_y, S_z]) has shape (10, 4), while your three column names imply only 3 columns. Do it like this instead:
frame4 = pd.DataFrame({"Recovered Data": S_x, "Percentage Error": S_y,"Zipped": S_z})
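To see the unzipping described above, a quick check of the stacked array's shape (reusing the lists from the question) makes it visible:
import numpy as np

S_x = [80, 90, 100, 200, 300, 600, 800, 900, 1000, 1200]
S_y = [800, 1000, 1200, 450, 80, 100, 60, 300, 700, 900]
S_z = list(zip(S_x, S_y))

# S_z is treated as a (10, 2) array, so its tuples become two separate columns.
print(np.column_stack([S_x, S_y, S_z]).shape)  # (10, 4)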
IIUC
frame=pd.DataFrame(zip(S_x, S_y, S_z), columns=["Recovered Data", "Percentage Error","Zipped"])
Recovered Data Percentage Error Zipped
0 80 800 (80, 800)
1 90 1000 (90, 1000)
2 100 1200 (100, 1200)
3 200 450 (200, 450)
4 300 80 (300, 80)
5 600 100 (600, 100)
6 800 60 (800, 60)
7 900 300 (900, 300)
8 1000 700 (1000, 700)
9 1200 900 (1200, 900)
I am trying to create a horizontal bar chart with matplotlib. My data points are the following two arrays
distance = [100, 200, 300, 400, 500, 3000]
value = [10, 15, 50, 74, 95, 98]
My code to generate the horizontal bar chart is as follows:
plt.barh(distance, value, height=75)
plt.savefig(fig_name, dpi=300)
plt.close()
The problem is that my image comes out like this:
https://imgur.com/a/Q8dvHKR
Is there a way to ensure all blocks are the same width and to remove the space between 500 and 3000?
You can do this by making sure Matplotlib treats your labels as labels, not as numbers, which you can do by converting them to strings:
import matplotlib.pyplot as plt
distance = [100, 200, 300, 400, 500, 3000]
value = [10, 15, 50, 74, 95, 98]
distance = [str(number) for number in distance]
plt.barh(distance, value, height=0.75)
Note that you also have to change the height: with string labels the bars sit at categorical positions one unit apart, so the bar height is now 0.75 rather than 75.
Alternatively, you can use a range of numbers as the y-values (via the range() function) to position the horizontal bars, and then set the tick labels as desired with plt.yticks(), whose first argument is the tick positions and whose second argument is the tick labels.
import matplotlib.pyplot as plt
distance = [100, 200, 300, 400, 500, 3000]
value = [10, 15, 50, 74, 95, 98]
plt.barh(range(len(distance)), value, height=0.6)
plt.yticks(range(len(distance)), distance)
plt.show()
I just want to understand the basic parameters and what they do specifically: width, height, angle, theta1, theta2. I followed the official documentation and I understood what the centre is, but I don't get what theta1 or theta2 do, what the angle does, or what the lengths of the horizontal and vertical axes mean.
I tried experimenting with the parameters using different numbers but failed to hit upon an accurate result.
I'm trying to create the arc of the 3-point area in the basketball court
The Arc type is a subclass of Ellipse, extended with two extra values, theta1 and theta2. The angle parameter behaves the same for both Ellipse and Arc: it sets the rotation at which the ellipse is drawn.
from matplotlib import pyplot as plt
from matplotlib.patches import Ellipse
fig = plt.figure(figsize=(2,5))
ax = fig.add_subplot(1,1,1)
ax.set_ylim(0, 50)
ax.set_xlim(0, 20)
ax.axis('off')
a = Ellipse((10, 45), 10, 3, angle=0, color='red', lw=1)
ax.add_patch(a)
a = Ellipse((10, 40), 10, 3, angle=10, color='red', lw=1)
ax.add_patch(a)
a = Ellipse((10, 35), 10, 3, angle=20, color='red', lw=1)
ax.add_patch(a)
a = Ellipse((10, 30), 10, 3, angle=30, color='red', lw=1)
ax.add_patch(a)
for a in range(0, 360, 40):
    a = Ellipse((10, 20), 10, 3, angle=a, color='red', lw=1, fc='none')
    ax.add_patch(a)
This produces —
Note that for a perfect circle (an ellipse of equal height and width) this makes no difference (as a circle is rotationally symmetrical).
from matplotlib import pyplot as plt
from matplotlib.patches import Ellipse
fig = plt.figure(figsize=(2,4))
ax = fig.add_subplot(1,1,1)
ax.set_ylim(0, 40)
ax.set_xlim(0, 20)
ax.axis('off')
a = Ellipse((10, 25), 10, 10, angle=0, color='red', lw=1)
ax.add_patch(a)
a = Ellipse((10, 10), 10, 10, angle=45, color='red', lw=1)
ax.add_patch(a)
Both circles are the same.
The documentation for matplotlib.patches.Arc explains that theta1 and theta2 are:
theta1, theta2 : float, optional
Starting and ending angles of the arc in degrees. These values are relative to angle, e.g. if angle = 45 and theta1 = 90 the absolute starting angle is 135. Default theta1 = 0, theta2 = 360, i.e. a complete ellipse.
The key statement there is "Default theta1 = 0, theta2 = 360, i.e. a complete ellipse." These parameters are used to draw partial ellipses, that is, arcs: theta1 is the angle on the ellipse at which drawing starts and theta2 is the angle at which it stops. Note that the calculation of the ellipse itself is unaffected.
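As a minimal sketch of the specific case quoted from the documentation (the figure dimensions here are my own, not from the original answer), with angle=45 and theta1=90 drawing starts 135 degrees around the shape:
from matplotlib import pyplot as plt
from matplotlib.patches import Arc

fig = plt.figure(figsize=(3, 3))
ax = fig.add_subplot(1, 1, 1)
ax.set_xlim(0, 20)
ax.set_ylim(0, 20)
ax.set_aspect('equal')
# On this circle, drawing begins at 90 + 45 = 135 degrees and stops at
# 180 + 45 = 225 degrees (measured counter-clockwise from the positive x-axis).
a = Arc((10, 10), 10, 10, angle=45, theta1=90, theta2=180, color='red', lw=1)
ax.add_patch(a)
plt.show()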
The following code draws a series of arcs which should make the logic apparent —
from matplotlib import pyplot as plt
from matplotlib.patches import Arc
fig = plt.figure(figsize=(2,5))
ax = fig.add_subplot(1,1,1)
ax.set_ylim(0, 50)
ax.set_xlim(0, 20)
ax.axis('off')
# A complete ellipse, using theta1=0, theta2=360.
a = Arc((10, 45), 10, 3, angle=0, theta1=0, theta2=360, color='red', lw=1)
ax.add_patch(a)
# Reduce theta2 to 350, last 10 deg of ellipse not drawn.
a = Arc((10, 40), 10, 3, angle=0, theta1=0, theta2=350, color='red', lw=1)
ax.add_patch(a)
# Rotate the ellipse (angle=90); theta1 & theta2 are relative to the start angle and rotate too.
a = Arc((10, 30), 10, 3, angle=90, theta1=0, theta2=350, color='red', lw=1)
ax.add_patch(a)
# Rotate the ellipse (angle=180), as above.
a = Arc((10, 20), 10, 3, angle=180, theta1=0, theta2=350, color='red', lw=1)
ax.add_patch(a)
# Draw the top half of the ellipse (theta 0-180 deg).
a = Arc((10, 10), 10, 3, angle=0, theta1=0, theta2=180, color='red', lw=1)
ax.add_patch(a)
# Draw the bottom half of the ellipse (theta 180-360 deg).
a = Arc((10, 5), 10, 3, angle=0, theta1=180, theta2=360, color='red', lw=1)
ax.add_patch(a)
This produces the following image, with arcs drawn above going from top to bottom. Compare with the comments in the code for explanation.
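Applied to the 3-point line from the question, a rough sketch could look like this (the court dimensions and hoop position below are placeholders I chose for illustration, not regulation values):
from matplotlib import pyplot as plt
from matplotlib.patches import Arc

fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(1, 1, 1)
ax.set_xlim(0, 50)
ax.set_ylim(0, 47)
ax.set_aspect('equal')
# Hoop assumed at (25, 5); equal width and height of 44 give a circular arc of radius 22.
# theta1=0, theta2=180 draws only the half that opens away from the baseline.
three_point = Arc((25, 5), width=44, height=44, angle=0, theta1=0, theta2=180,
                  color='black', lw=2)
ax.add_patch(three_point)
plt.show()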
I am trying to use RandomizedSearchCV to find the best parameters for a random forest that is supposed to predict a continuous variable.
I've been looking at the following approach, in particular changing the scoring function, eventually settling on the regression metric median_absolute_error. However, I think that KFold cross-validation is not appropriate for my data, and I do not understand how I can, for example, use an iterable cv (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html), since (as far as I understand) I cannot run fit and predict on my model before the RandomizedSearchCV.
from random import randint

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer, median_absolute_error
from sklearn.model_selection import RandomizedSearchCV

def my_custom_score(y_true, y_pred, dates_, features, labels):
    return median_absolute_error(y_true, y_pred)
...
for i in range(0, 3):  # predict 3 10-point intervals
    prediction_colour = ['g', 'r', 'c', 'm', 'y', 'k', 'w'][i % 7]
    date_for_test = randint(11, 200)  # end of the trend
    dates_for_test = range(date_for_test - 10, date_for_test)  # one predicted interval should have 10 date points
    for idx, date_for_test_ in enumerate(sorted(dates_for_test, reverse=True)):
        train_features = features[sorted(dates_for_test, reverse=True)[0] - 2:]
        train_labels = labels[sorted(dates_for_test, reverse=True)[0] - 2:]
        test_features = np.atleast_2d(features[date_for_test_])
        test_labels = labels[date_for_test_] if date_for_test != 0 else 1.0
        rf = RandomForestRegressor(bootstrap=False, criterion='mse', max_features=5,
                                   min_weight_fraction_leaf=0, n_jobs=1, oob_score=False,
                                   random_state=None, verbose=0, warm_start=False)
        parameters = {
            "max_leaf_nodes": [2, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50],
            "min_samples_leaf": [1, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950, 1000],
            "min_samples_split": [2, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950, 1000],
            "n_estimators": [10, 100, 250, 500, 750, 1000, 1250, 1500, 1750, 2000, 2250, 2500, 2750, 3000, 3250, 3500, 3750, 4000, 4250, 4500, 4750, 5000, 5250, 5500, 5750, 6000, 6250, 6500, 6750, 7000, 7250, 7500, 7750, 8000, 8250, 8500, 8750, 9000, 9250, 9500, 9750, 10000],
            "max_depth": [1, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950, 1000],
        }
        grid_search = RandomizedSearchCV(cv=5, estimator=rf, param_distributions=parameters, n_iter=10,
                                         scoring=make_scorer(median_absolute_error))
        # scoring=make_scorer(lambda x, y: my_custom_score(x, y, sorted(dates_for_test, reverse=True), features, labels), greater_is_better=False)
        grid_search.fit(train_features, train_labels)
        rf = grid_search.best_estimator_
        best_parameters = rf.get_params()
        print("best parameters")
        for param_name in sorted(parameters.keys()):
            print("\t%s: %r" % (param_name, best_parameters[param_name]))
        predictions = rf.predict(test_features)
Also, with the current approach I get the same continuous value predicted for out-of-sample temporal data a few dates into the future (different colours on the graph):
The documentation is quite detailed on this matter, but I find it too detailed and I just get lost there. Could someone point me in the right direction?
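As a pointer, here is a minimal sketch with placeholder data (not your features/labels): the cv argument of RandomizedSearchCV accepts either a splitter object such as TimeSeriesSplit or a plain iterable of (train_indices, test_indices) pairs, and in either case you do not need to call fit or predict yourself beforehand; the search builds and evaluates the folds internally.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer, median_absolute_error
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

# Placeholder data, ordered oldest -> newest (stands in for your features/labels).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.normal(size=200)

# Forward-chaining folds: each fold trains on the past and tests on later dates.
tscv = TimeSeriesSplit(n_splits=5)

# Alternatively, an explicit iterable of (train_indices, test_indices) pairs:
# custom_cv = [(np.arange(0, 150), np.arange(150, 175)),
#              (np.arange(0, 175), np.arange(175, 200))]

search = RandomizedSearchCV(
    estimator=RandomForestRegressor(),
    param_distributions={"n_estimators": [100, 250, 500],
                         "max_depth": [5, 10, None]},
    n_iter=5,
    cv=tscv,  # or cv=custom_cv
    scoring=make_scorer(median_absolute_error, greater_is_better=False),
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)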