Can someone help me fit this array in KMeans clustering? - python-3.x

When I try to fit it in KMeans clustering it throws the error "ValueError: setting an array element with a sequence."
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=5)
kmeans.fit(df)
Array description:
0      [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
1      [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
10     [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
100    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
101    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
Name: Vector, Length: 179, dtype: object

Your column has a list in it. It needs to be opened up into multiple columns before passing it to KMeans.
df = pd.read_json('/Users/roshansk/Downloads/NewsArticles.json')
#Extracting the vectors into columns
vectors = df.Vector.apply(pd.Series)
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=5)
kmeans.fit(vectors)
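Alternatively, a minimal sketch that stacks the per-row lists into a single 2-D NumPy array before fitting (assuming every list in df.Vector has the same length):
import numpy as np
from sklearn.cluster import KMeans

# Stack the per-row lists into an (n_samples, n_features) array
X = np.vstack(df["Vector"].to_numpy())

kmeans = KMeans(n_clusters=5)
kmeans.fit(X)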

Related

Pivoting ArrayType columns in pyspark

I have a pyspark dataframe with the following schema
+----------+-------------------+-----------------------------------+------------------+
| date| numeric_id| feature_column| city|
+----------+-------------------+-----------------------------------+------------------+
|2017-08-01| 2343434545| [0.0, 0.0, 0.0, 0...| Berlin|
|2017-08-01| 2343434545| [0.0, 0.0, 0.0, 0...| Rome|
|2017-08-01| 2343434545| [0.0, 0.0, 0.0, 0...| NewYork|
|2017-08-01| 2343434545| [0.0, 0.0, 0.0, 0...| Beijing|
|2019-12-01| 6455534545| [0.0, 0.0, 0.0, 0...| Berlin|
|2019-12-01| 6455534545| [0.0, 0.0, 0.0, 0...| Rome|
|2019-12-01| 6455534545| [0.0, 0.0, 0.0, 0...| NewYork|
|2019-12-01| 6455534545| [0.0, 0.0, 0.0, 0...| Beijing|
+----------+-------------------+-----------------------------------+------------------+
I want to pivot the dataframe so that I can have each feature_column x city as a new column, grouped by date and numeric_id. The output dataframe should look like
+----------+-------------+----------------------+--------------------+-----------------------+----------------------+
| date| numeric_id| feature_column_Berlin| feature_column_Rome| feature_column_NewYork|feature_column_Beijing|
+----------+-------------+----------------------+--------------------+-----------------------+----------------------+
|2017-08-01| 2343434545| [0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0... |[0.0, 0.0, 0.0, 0... |
|2019-12-01| 6455534545| [0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0... |[0.0, 0.0, 0.0, 0... |
+----------+-------------+----------------------+--------------------+-----------------------+----------------------+
This is different from the question on pivoting strings (Pivot String column on Pyspark Dataframe) since I am dealing with ArrayType columns.
I'm thinking it would be easier to implement in Pandas (but handling ArrayType columns will be tricky), so I am curious about how to do it using Spark SQL. Any suggestions?
// Initially I am creating the sample data to load into a DataFrame.
import org.apache.spark.sql.functions._

val df = Seq(
  ("2017-08-01", "2343434545", Array("0.0","0.0","0.0","0.0"), "Berlin"),
  ("2017-08-01", "2343434545", Array("0.0","0.0","0.0","0.0"), "Rome"),
  ("2017-08-01", "2343434545", Array("0.0","0.0","0.0","0.0"), "NewYork"),
  ("2017-08-01", "2343434545", Array("0.0","0.0","0.0","0.0"), "Beijing"),
  ("2019-12-01", "6455534545", Array("0.0","0.0","0.0","0.0"), "Berlin"),
  ("2019-12-01", "6455534545", Array("0.0","0.0","0.0","0.0"), "Rome"),
  ("2019-12-01", "6455534545", Array("0.0","0.0","0.0","0.0"), "NewYork"),
  ("2019-12-01", "6455534545", Array("0.0","0.0","0.0","0.0"), "Beijing")
).toDF("date","numeric_id","feature_column","city")

df.groupBy("date","numeric_id").pivot("city")
  .agg(collect_list("feature_column"))
  .withColumnRenamed("Beijing","feature_column_Beijing")
  .withColumnRenamed("Berlin","feature_column_Berlin")
  .withColumnRenamed("NewYork","feature_column_NewYork")
  .withColumnRenamed("Rome","feature_column_Rome").show()

The output shows the pivoted feature_column_* columns, one per city, grouped by date and numeric_id.
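Since the question is about PySpark, a rough Python equivalent of the same pivot might look like the sketch below (assumptions: an active SparkSession named spark; F.first is used instead of collect_list so each pivoted cell keeps a single array rather than an array of arrays):
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

data = [
    ("2017-08-01", "2343434545", [0.0, 0.0, 0.0, 0.0], "Berlin"),
    ("2017-08-01", "2343434545", [0.0, 0.0, 0.0, 0.0], "Rome"),
    ("2019-12-01", "6455534545", [0.0, 0.0, 0.0, 0.0], "Berlin"),
    ("2019-12-01", "6455534545", [0.0, 0.0, 0.0, 0.0], "Rome"),
]
df = spark.createDataFrame(data, ["date", "numeric_id", "feature_column", "city"])

pivoted = (df.groupBy("date", "numeric_id")
             .pivot("city")
             .agg(F.first("feature_column")))

# Prefix the pivoted city columns to match the desired names
for city in ["Berlin", "Rome"]:
    pivoted = pivoted.withColumnRenamed(city, "feature_column_" + city)

pivoted.show(truncate=False)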

How to write a custom loss function in LGBM?

I have a binary cross-entropy implementation in Keras, and I would like to implement the same one in LGBM as a custom loss. I understand LGBM of course has a built-in 'binary' objective, but I would like to implement this one on my own as a starter for some future enhancements.
Here is the code:
def custom_binary_loss(y_true, y_pred):
    """
    Keras version of binary cross-entropy (works like charm!)
    """
    # https://github.com/tensorflow/tensorflow/blob/v2.3.1/tensorflow/python/keras/backend.py#L4826
    y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
    term_0 = (1 - y_true) * K.log(1 - y_pred + K.epsilon())  # Cancels out when target is 1
    term_1 = y_true * K.log(y_pred + K.epsilon())  # Cancels out when target is 0
    return -K.mean(term_0 + term_1, axis=1)

# --------------------

def custom_binary_loss_lgbm(y_pred, train_data):
    """
    LGBM version of binary cross-entropy
    """
    y_pred = 1.0 / (1.0 + np.exp(-y_pred))
    y_true = train_data.get_label()
    y_true = np.expand_dims(y_true, axis=1)
    y_pred = np.expand_dims(y_pred, axis=1)
    epsilon_ = 1e-7
    y_pred = np.clip(y_pred, epsilon_, 1 - epsilon_)
    term_0 = (1 - y_true) * np.log(1 - y_pred + epsilon_)  # Cancels out when target is 1
    term_1 = y_true * np.log(y_pred + epsilon_)  # Cancels out when target is 0
    grad = -np.mean(term_0 + term_1, axis=1)
    hess = np.ones(grad.shape)
    return grad, hess
But using the above, my LGBM model only predicts zeros. My dataset is balanced and everything looks fine, so what's the error here?
params = {
    'objective': 'binary',
    'num_iterations': 100,
    'seed': 21
}
ds_train = lgb.Dataset(df_train[predictors], y, free_raw_data=False)
reg_lgbm = lgb.train(params=params, train_set=ds_train, fobj=custom_binary_loss_lgbm)
I also tried a different hessian, hess = (y_pred * (1. - y_pred)).flatten(). Although I don't know what the hessian really means, that didn't work either!
list(map(lambda x: 1.0 / (1.0 + np.exp(-x)), reg_lgbm.predict(df_train[predictors])))
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, .............]
Try setting the metric parameter to the string "None" in params, like this:
params = {
    'objective': 'binary',
    'metric': 'None',
    'num_iterations': 100,
    'seed': 21
}
Otherwise, according to the documentation, the algorithm chooses a default evaluation metric for an objective set to 'binary'.
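As a side note, a LightGBM fobj is expected to return the per-sample gradient and hessian of the loss with respect to the raw scores, not the averaged loss value itself. A minimal sketch of binary cross-entropy written that way (the function name is just an example; train_data is assumed to be an lgb.Dataset):
import numpy as np

def binary_logloss_objective(y_pred, train_data):
    """Per-sample gradient and hessian of binary cross-entropy w.r.t. raw scores."""
    y_true = train_data.get_label()
    prob = 1.0 / (1.0 + np.exp(-y_pred))  # sigmoid of the raw scores
    grad = prob - y_true                  # first derivative of the log loss
    hess = prob * (1.0 - prob)            # second derivative of the log loss
    return grad, hess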

Dense Vector Column to Sparse Vector Column

I have a unique situation where I need to go from a DenseVector column to a SparseVector column.
I am trying to implement the SMOTE technique I found here: https://github.com/Angkirat/Smote-for-Spark/blob/master/PythonCode.py
But on line 44 I had to change it from min_Array[neigh][0] - min_Array[i][0] to DenseVector(min_Array[neigh][0]) - DenseVector(min_Array[i][0]) due to an error.
Once I have the DenseVector column, I need to convert it back to a SparseVector column to union my data.
I have tried the following:
df = sc.parallelize([
    (1, DenseVector([0.0, 1.0, 1.0, 2.0, 1.0, 3.0, 0.0, 0.0, 0.0, 0.0])),
    (2, DenseVector([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 100.0])),
    (3, DenseVector([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])),
]).toDF(["row_num", "features"])
list_to_vector_udf = udf(lambda l: Vectors.sparse(l), VectorUDT())
df = df.withColumn('features', list_to_vector_udf(df["features"]))
"int() argument must be a string, a bytes-like object or a number, not 'DenseVector'"
assembler = VectorAssembler(inputCols=['features'],outputCol='features')
df = assembler.transform(df)
"Data type struct<type:tinyint,size:int,indices:array<int>,values:array<double>> of column features is not supported."
It usually doesn't make much sense to convert a dense vector to a sparse vector, since the dense vector has already taken up the memory. If you really need to do this, look at the sparse vector API: it either accepts a list of (index, value) pairs, or you pass the nonzero indices and values directly to the constructor. Something like the following:
from pyspark.ml.linalg import Vectors, VectorUDT
from pyspark.ml.linalg import DenseVector
from pyspark.sql.functions import udf

df = sc.parallelize([
    (1, DenseVector([0.0, 1.0, 1.0, 2.0, 1.0, 3.0, 0.0, 0.0, 0.0, 0.0])),
    (2, DenseVector([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 100.0])),
    (3, DenseVector([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])),
]).toDF(["row_num", "features"])

def to_sparse(dense_vector):
    size = len(dense_vector)
    pairs = [(i, v) for i, v in enumerate(dense_vector.values.tolist()) if v != 0]
    return Vectors.sparse(size, pairs)

dense_to_sparse_udf = udf(to_sparse, VectorUDT())
df = df.withColumn('features', dense_to_sparse_udf(df["features"]))
df.show()
+-------+--------------------+
|row_num| features|
+-------+--------------------+
| 1|(10,[1,2,3,4,5],[...|
| 2| (10,[9],[100.0])|
| 3| (10,[1],[1.0])|
+-------+--------------------+
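For reference, both constructor forms mentioned above build the same sparse vector; a small sketch (pyspark.ml.linalg vectors are plain local objects, so no running Spark context is needed here):
from pyspark.ml.linalg import Vectors

# The same 10-element sparse vector, built two ways
sv_from_pairs = Vectors.sparse(10, [(9, 100.0)])   # list of (index, value) pairs
sv_from_arrays = Vectors.sparse(10, [9], [100.0])  # separate indices and values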

sympy solve() gives implicit/incorrect answer

I'm trying to solve an equation system with 16 equations and 16 unknowns using sympy but it doesn't seem to solve it well.
I want to solve the system [K][d]=[f] where [K] is the coefficient matrix, [d] the unknowns and [f] are constants. I know some unknowns "d" and some constants "f", so I have the same number of equations and unknowns, but when I substitute these values into the equations and try to solve, the results for all "dx" include "dx8". I checked the matrix determinant and it is positive, so I should get a unique answer.
Here is the code:
import sympy as sp
import numpy as np
K = np.array([[560000000.0, 0.0, -480000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0,-80000000.0, 120000000.0, 0.0, -200000000.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 393333333.3, 120000000.0, -180000000.0, 0.0, 0.0, 0.0, 0.0,80000000.0, -213333333.3, -200000000.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[-480000000.0, 120000000.0, 1120000000.0, -200000000.0,-480000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0, -160000000.0,200000000.0, 0.0, -200000000.0, 0.0, 0.0],
[80000000.0, -180000000.0, -200000000.0, 786666666.7, 120000000.0,-180000000.0, 0.0, 0.0, 0.0, 0.0, 200000000.0, -426666666.7,-200000000.0, 0.0, 0.0, 0.0],
[0.0, 0.0, -480000000.0, 120000000.0, 1120000000.0, -200000000.0,-480000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0, -160000000.0,200000000.0, 0.0, -200000000.0],
[0.0, 0.0, 80000000.0, -180000000.0, -200000000.0, 786666666.7,120000000.0, -180000000.0, 0.0, 0.0, 0.0, 0.0, 200000000.0,-426666666.7, -200000000.0, 0.0],
[0.0, 0.0, 0.0, 0.0, -480000000.0, 120000000.0, 560000000.0,-200000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -80000000.0, 80000000.0],
[0.0, 0.0, 0.0, 0.0, 80000000.0, -180000000.0, -200000000.0,393333333.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 120000000.0,-213333333.3],
[-80000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 560000000.0,-200000000.0, -480000000.0, 120000000.0, 0.0, 0.0, 0.0, 0.0],
[120000000.0, -213333333.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,-200000000.0, 393333333.3, 80000000.0, -180000000.0, 0.0, 0.0, 0.0,0.0],
[0.0, -200000000.0, -160000000.0, 200000000.0, 0.0, 0.0, 0.0, 0.0,-480000000.0, 80000000.0, 1120000000.0, -200000000.0, -480000000.0,120000000.0, 0.0, 0.0],
[-200000000.0, 0.0, 200000000.0, -426666666.7, 0.0, 0.0, 0.0, 0.0,120000000.0, -180000000.0, -200000000.0, 786666666.7, 80000000.0,-180000000.0, 0.0, 0.0],
[0.0, 0.0, 0.0, -200000000.0, -160000000.0, 200000000.0, 0.0, 0.0,0.0, 0.0, -480000000.0, 80000000.0, 1120000000.0, -200000000.0,-480000000.0, 120000000.0],
[0.0, 0.0, -200000000.0, 0.0, 200000000.0, -426666666.7, 0.0, 0.0,0.0, 0.0, 120000000.0, -180000000.0, -200000000.0, 786666666.7,80000000.0, -180000000.0],
[0.0, 0.0, 0.0, 0.0, 0.0, -200000000.0, -80000000.0, 120000000.0,0.0, 0.0, 0.0, 0.0, -480000000.0, 80000000.0, 560000000.0, 0.0],
[0.0, 0.0, 0.0, 0.0, -200000000.0, 0.0, 80000000.0, -213333333.3,0.0, 0.0, 0.0, 0.0, 120000000.0, -180000000.0, 0.0, 393333333.3]])
x = [sp.var('dx'+ str(i+1)) for i in range(8)]
y = [sp.var('dy'+ str(i+1)) for i in range(8)]
fx = [sp.var('fx'+ str(i+1)) for i in range(8)]
fy = [sp.var('fy'+ str(i+1)) for i in range(8)]
xy = list(sum(zip(x, y), ()))
fxy = list(sum(zip(fx, fy), ()))
M = sp.Matrix(K)*sp.Matrix(xy)
Ec = [sp.Eq(M[i], fxy[i]) for i in range(16)]
#known values
d_kwn = [(dy1, 0), (dy2, 0), (dy3, 0), (dy4, 0)]
f_kwn = [(fx5, 0), (fy5, 0), (fx6, 0), (fy6, -3000), (fx7, 0), (fy7, -3000),(fx8, 0), (fy8, 0), (fx1, 0), (fx2, 0), (fx3, 0), (fx4, 0)]
for var in d_kwn:
    for i, eq in enumerate(Ec):
        Ec[i] = eq.subs(var[0], var[1])
for var in f_kwn:
    for i, eq in enumerate(Ec):
        Ec[i] = eq.subs(var[0], var[1])
Sols = sp.solvers.solve(Ec)
sp.Matrix(sorted(Sols.items(), key=str))
And this is the output I'm getting:
{dx1: dx8 - 3.54468009860439e-6,
 dx2: dx8 - 1.8414987360977e-6,
 dx3: dx8 - 2.11496606381994e-7,
 dx4: dx8 + 2.05943267588118e-7,
 dx5: dx8 - 1.24937663359153e-6,
 dx6: dx8 - 1.55655946713284e-6,
 dx7: dx8 - 1.08797652070783e-6,
 dy5: -2.10639657360695e-6,
 dy6: -6.26959460018537e-6,
 dy7: -6.32191585665888e-6,
 dy8: -2.7105825114088e-6,
 fy1: 439.746516706791,
 fy2: 2640.65618690176,
 fy3: 2399.44807607611,
 fy4: 520.14922031534}
I don't know why I'm not getting a result for dx8. I tried adding more equations because theoretically dx1 = dx4, dx2 = dx3, dx5 = dx8, dx6 = dx7 and so on, but that gives me an empty list.
Any help will be appreciated.
If you need to use Sympy, then the following may work. First we can solve the reduced system of equations for the unknown d values only. Then, once all d values are known, the unknown f values can be calculated from [K][d]=[f] using only the rows that correspond to unknown f entries (not implemented in the code below).
import sympy as sp
import numpy as np
K = np.array([[560000000.0, 0.0, -480000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0,-80000000.0, 120000000.0, 0.0, -200000000.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 393333333.3, 120000000.0, -180000000.0, 0.0, 0.0, 0.0, 0.0,80000000.0, -213333333.3, -200000000.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[-480000000.0, 120000000.0, 1120000000.0, -200000000.0,-480000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0, -160000000.0,200000000.0, 0.0, -200000000.0, 0.0, 0.0],
[80000000.0, -180000000.0, -200000000.0, 786666666.7, 120000000.0,-180000000.0, 0.0, 0.0, 0.0, 0.0, 200000000.0, -426666666.7,-200000000.0, 0.0, 0.0, 0.0],
[0.0, 0.0, -480000000.0, 120000000.0, 1120000000.0, -200000000.0,-480000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0, -160000000.0,200000000.0, 0.0, -200000000.0],
[0.0, 0.0, 80000000.0, -180000000.0, -200000000.0, 786666666.7,120000000.0, -180000000.0, 0.0, 0.0, 0.0, 0.0, 200000000.0,-426666666.7, -200000000.0, 0.0],
[0.0, 0.0, 0.0, 0.0, -480000000.0, 120000000.0, 560000000.0,-200000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -80000000.0, 80000000.0],
[0.0, 0.0, 0.0, 0.0, 80000000.0, -180000000.0, -200000000.0,393333333.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 120000000.0,-213333333.3],
[-80000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 560000000.0,-200000000.0, -480000000.0, 120000000.0, 0.0, 0.0, 0.0, 0.0],
[120000000.0, -213333333.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,-200000000.0, 393333333.3, 80000000.0, -180000000.0, 0.0, 0.0, 0.0,0.0],
[0.0, -200000000.0, -160000000.0, 200000000.0, 0.0, 0.0, 0.0, 0.0,-480000000.0, 80000000.0, 1120000000.0, -200000000.0, -480000000.0,120000000.0, 0.0, 0.0],
[-200000000.0, 0.0, 200000000.0, -426666666.7, 0.0, 0.0, 0.0, 0.0,120000000.0, -180000000.0, -200000000.0, 786666666.7, 80000000.0,-180000000.0, 0.0, 0.0],
[0.0, 0.0, 0.0, -200000000.0, -160000000.0, 200000000.0, 0.0, 0.0,0.0, 0.0, -480000000.0, 80000000.0, 1120000000.0, -200000000.0,-480000000.0, 120000000.0],
[0.0, 0.0, -200000000.0, 0.0, 200000000.0, -426666666.7, 0.0, 0.0,0.0, 0.0, 120000000.0, -180000000.0, -200000000.0, 786666666.7,80000000.0, -180000000.0],
[0.0, 0.0, 0.0, 0.0, 0.0, -200000000.0, -80000000.0, 120000000.0,0.0, 0.0, 0.0, 0.0, -480000000.0, 80000000.0, 560000000.0, 0.0],
[0.0, 0.0, 0.0, 0.0, -200000000.0, 0.0, 80000000.0, -213333333.3,0.0, 0.0, 0.0, 0.0, 120000000.0, -180000000.0, 0.0, 393333333.3]])
x = [sp.var('dx'+ str(i+1)) for i in range(8)]
y = [sp.var('dy'+ str(i+1)) for i in range(8)]
fx = [sp.var('fx'+ str(i+1)) for i in range(8)]
fy = [sp.var('fy'+ str(i+1)) for i in range(8)]
xy = list(sum(zip(x, y), ()))
fxy = list(sum(zip(fx, fy), ()))
M = sp.Matrix(K)*sp.Matrix(xy)
Ec = [sp.Eq(M[i], fxy[i]) for i in range(16)]
#known values
d_kwn = [(dy1, 0), (dy2, 0), (dy3, 0), (dy4, 0)]
f_kwn = [(fx5, 0), (fy5, 0), (fx6, 0), (fy6, -3000), (fx7, 0), (fy7, -3000),(fx8, 0), (fy8, 0), (fx1, 0), (fx2, 0), (fx3, 0), (fx4, 0)]
for var in d_kwn:
    for i, eq in enumerate(Ec):
        Ec[i] = eq.subs(var[0], var[1])
for var in f_kwn:
    for i, eq in enumerate(Ec):
        Ec[i] = eq.subs(var[0], var[1])

Ec_part = []
for i in [0,2,4,6,8,9,10,11,12,13,14,15]:
    Ec_part.append(Ec[i])
unknwns = [*x, *y[4:8]]
Sols = sp.linsolve(Ec_part,unknwns)
Sols = next( iter(Sols) )
#sp.Matrix(sorted(Sols.items(), key=str))
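A rough sketch of the step left unimplemented above (recovering the unknown fy1..fy4 once the d values are known), assuming linsolve returned a single numeric solution tuple ordered like unknwns:
# Map each solved symbol to its numeric value
sol_map = dict(zip(unknwns, Sols))

# Equations 1, 3, 5 and 7 were skipped above; their right-hand sides are the
# unknown fy1..fy4, so substituting the solved d values into their left-hand
# sides recovers those forces
f_unknown = [sp.N(Ec[i].lhs.subs(sol_map)) for i in (1, 3, 5, 7)]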
It is convenient to solve systems of linear equations in NumPy itself. The type of system you are solving appears often in finite element analysis with boundary conditions. Is it fine if we only use NumPy? If yes, the following code will do the job. Since we already know which elements of f and d are known, we can use NumPy array indexing to solve the reduced set of equations as follows:
import numpy as np
# The NxN Coefficients matrix
K = np.array([[560000000.0, 0.0, -480000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0,-80000000.0, 120000000.0, 0.0, -200000000.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 393333333.3, 120000000.0, -180000000.0, 0.0, 0.0, 0.0, 0.0,80000000.0, -213333333.3, -200000000.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[-480000000.0, 120000000.0, 1120000000.0, -200000000.0,-480000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0, -160000000.0,200000000.0, 0.0, -200000000.0, 0.0, 0.0],
[80000000.0, -180000000.0, -200000000.0, 786666666.7, 120000000.0,-180000000.0, 0.0, 0.0, 0.0, 0.0, 200000000.0, -426666666.7,-200000000.0, 0.0, 0.0, 0.0],
[0.0, 0.0, -480000000.0, 120000000.0, 1120000000.0, -200000000.0,-480000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0, -160000000.0,200000000.0, 0.0, -200000000.0],
[0.0, 0.0, 80000000.0, -180000000.0, -200000000.0, 786666666.7,120000000.0, -180000000.0, 0.0, 0.0, 0.0, 0.0, 200000000.0,-426666666.7, -200000000.0, 0.0],
[0.0, 0.0, 0.0, 0.0, -480000000.0, 120000000.0, 560000000.0,-200000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -80000000.0, 80000000.0],
[0.0, 0.0, 0.0, 0.0, 80000000.0, -180000000.0, -200000000.0,393333333.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 120000000.0,-213333333.3],
[-80000000.0, 80000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 560000000.0,-200000000.0, -480000000.0, 120000000.0, 0.0, 0.0, 0.0, 0.0],
[120000000.0, -213333333.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,-200000000.0, 393333333.3, 80000000.0, -180000000.0, 0.0, 0.0, 0.0,0.0],
[0.0, -200000000.0, -160000000.0, 200000000.0, 0.0, 0.0, 0.0, 0.0,-480000000.0, 80000000.0, 1120000000.0, -200000000.0, -480000000.0,120000000.0, 0.0, 0.0],
[-200000000.0, 0.0, 200000000.0, -426666666.7, 0.0, 0.0, 0.0, 0.0,120000000.0, -180000000.0, -200000000.0, 786666666.7, 80000000.0,-180000000.0, 0.0, 0.0],
[0.0, 0.0, 0.0, -200000000.0, -160000000.0, 200000000.0, 0.0, 0.0,0.0, 0.0, -480000000.0, 80000000.0, 1120000000.0, -200000000.0,-480000000.0, 120000000.0],
[0.0, 0.0, -200000000.0, 0.0, 200000000.0, -426666666.7, 0.0, 0.0,0.0, 0.0, 120000000.0, -180000000.0, -200000000.0, 786666666.7,80000000.0, -180000000.0],
[0.0, 0.0, 0.0, 0.0, 0.0, -200000000.0, -80000000.0, 120000000.0,0.0, 0.0, 0.0, 0.0, -480000000.0, 80000000.0, 560000000.0, 0.0],
[0.0, 0.0, 0.0, 0.0, -200000000.0, 0.0, 80000000.0, -213333333.3,0.0, 0.0, 0.0, 0.0, 120000000.0, -180000000.0, 0.0, 393333333.3]])
# A logical array for indexing
N = K.shape[0] # The number of columns in K
N_2 = int(N/2);
# Prepare the 'f'
fx = np.zeros( N_2 );
fy = np.zeros( N_2 );
fx[ [0,1,2,3,4,5,6,7] ] = np.array([0]*N_2) # Known values of fx
fy[ [4,5,6,7] ] = np.array([0,-3000,-3000,0])
f = np.concatenate( (fx,fy) )
# Solve for the unknown equations only
d = np.zeros( N )
rows = np.array([0,1,2,3,4,5,6,7,12,13,14,15])
rows = rows[:, np.newaxis]
columns = np.array([0,1,2,3,4,5,6,7,12,13,14,15])
d[ columns ] = np.linalg.solve( K[ rows, columns ], f[ columns ] )
# Calculate the unknown f values from the full rows of K (f = K @ d for those rows)
f[ [8,9,10,11] ] = K[ [8,9,10,11], : ] @ d

Gaussian elimination with partial pivoting (column)

I cannot find the mistake I made; could anyone help me? Thanks very much!
import math
def GASSEM():
    a0 = [12,-2,1,0,0,0,0,0,0,0,13.97]
    a1 = [-2,12,-2,1,0,0,0,0,0,0,5.93]
    a2 = [1,-2,12,-2,1,0,0,0,0,0,-6.02]
    a3 = [0,1,-2,12,-2,1,0,0,0,0,8.32]
    a4 = [0,0,1,-2,12,-2,1,0,0,0,-23.75]
    a5 = [0,0,0,1,-2,12,-2,1,0,0,28.45]
    a6 = [0,0,0,0,1,-2,12,-2,1,0,-8.9]
    a7 = [0,0,0,0,0,1,-2,12,-2,1,-10.5]
    a8 = [0,0,0,0,0,0,1,-2,12,-2,10.34]
    a9 = [0,0,0,0,0,0,0,1,-2,12,-38.74]
    A = [a0,a1,a2,a3,a4,a5,a6,a7,a8,a9]  # 10x11 matrix
    interchange = [0,0,0,0,0,0,0,0,0,0,0]
    for i in range(1, 10):
        median = abs(A[i-1][i-1])
        for m in range(i, 10):  # pivoting
            if abs(A[m][i-1]) > median:
                median = abs(A[m][i-1])
                interchange = A[i-1]
                A[i-1] = A[m]
                A[m] = interchange
        for j in range(i, 10):  # creating upper triangle matrix
            A[j] = [A[j][k] - (A[j][i-1]/A[i-1][i-1])*A[i-1][k] for k in range(0, 11)]
    for t in range(0, 10):  # print the upper triangle matrix
        print(A[t])
The output is not an upper triangular matrix; I'm getting lost in the for loops...
When I run this code, the output is
[12, -2, 1, 0, 0, 0, 0, 0, 0, 0, 13.97]
[0.0, 11.666666666666666, -1.8333333333333333, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 8.258333333333333]
[0.0, 0.0, 11.628571428571428, -1.842857142857143, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, -5.886428571428571]
[0.0, 0.0, -2.220446049250313e-16, 11.622235872235873, -1.8415233415233416, 1.0, 0.0, 0.0, 0.0, 0.0, 6.679281326781327]
[0.0, 0.0, -3.518258683818212e-17, 0.0, 11.622218698800275, -1.8415517150256329, 1.0, 0.0, 0.0, 0.0, -22.185475397706252]
[0.0, 0.0, 1.3530439218911067e-17, 0.0, 0.0, 11.62216239813737, -1.841549039580908, 1.0, 0.0, 0.0, 24.359991632712457]
[0.0, 0.0, 5.171101701700419e-18, 0.0, 0.0, 0.0, 11.622161705324444, -1.84154850220678, 1.0, 0.0, -3.131238144426707]
[0.0, 0.0, -3.448243038110395e-19, 0.0, 0.0, 0.0, 0.0, 11.62216144141611, -1.8415485389982904, 1.0, -13.0921440313208]
[0.0, 0.0, -4.995725026226573e-19, 0.0, 0.0, 0.0, 0.0, 0.0, 11.622161418001749, -1.8415485322346454, 8.534950160892514]
[0.0, 0.0, -4.9488445836100553e-20, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 11.622161417603511, -36.26114362292296]
This effectively is upper triangular. The absolute values of the 'non-zero' entries in the third column of the lower triangle are all less than 1e-15. Given that the other values are 1 or greater, these small numbers look like floating-point subtraction errors in A[j][k] - (A[j][i-1]/A[i-1][i-1])*A[i-1][k] and can be treated as 0. Without more investigation, I don't know why the non-zero values are limited to this column.
For this data, the condition abs(A[m][i-1]) > median is never true, so the if block code is not tested.
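A quick way to check that claim numerically (a sketch, assuming the final matrix A from the code above is available, e.g. returned from GASSEM instead of only printed):
import numpy as np

A_arr = np.array(A)                        # the 10x11 augmented matrix after elimination
below_diag = np.tril(A_arr[:, :10], k=-1)  # strictly lower-triangular part of the 10x10 block
print(np.abs(below_diag).max())            # on the order of 1e-16, i.e. floating-point noise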
