Changing the values of a matrix changes the weights of the model - python-3.x

I am working with neural network weights and I am seeing something weird. I have written this code:
x = list(mnist_classifier.named_parameters())
weight = x[0][1].detach().cpu().numpy().squeeze()
print(weight)
So I get the following values:
[[[-0.2435195 0.05255396 -0.32765684]
[ 0.06372751 0.03564635 -0.31417745]
[ 0.14694464 -0.03277654 -0.10328879]]
[[-0.13716389 0.0128522 0.24107361]
[ 0.45231998 0.15497956 0.11112727]
[ 0.18206735 -0.22820294 -0.29146808]]
[[ 1.1747813 0.9206593 0.49848938]
[ 1.1558323 1.0859997 0.7743778 ]
[ 1.0287125 0.52122927 0.4096022 ]]
[[-0.2980809 -0.04358199 -0.26461622]
[-0.1165191 -0.2267315 0.37054354]
[ 0.4429275 0.44967037 0.06866694]]
[[ 0.39549246 0.10898255 0.32859102]
[-0.07753246 0.1628792 0.03021396]
[ 0.323148 0.5103844 0.16282919]]
....
Now, when I change the value of the first matrix weight[0] to 0.1, it changes the values of the original weights:
x = list(mnist_classifier.named_parameters())
weight = x[0][1].detach().cpu().numpy().squeeze()
weight[0] = weight[0] * 0 + 0.1
print(list(mnist_classifier.named_parameters()))
[('conv1.weight', Parameter containing:
tensor([[[[ 0.1000, 0.1000, 0.1000],
[ 0.1000, 0.1000, 0.1000],
[ 0.1000, 0.1000, 0.1000]]],
[[[-0.1372, 0.0129, 0.2411],
[ 0.4523, 0.1550, 0.1111],
[ 0.1821, -0.2282, -0.2915]]],
[[[ 1.1748, 0.9207, 0.4985],
[ 1.1558, 1.0860, 0.7744],
[ 1.0287, 0.5212, 0.4096]]],
...
What is going on here? How is weight[0] connected to the neural network?

I found the answer. Apparently, the NumPy array returned by .numpy() shares memory with the underlying tensor (and .detach() and .cpu() don't copy either when the tensor is already on the CPU), so writing to the array writes straight through to the model's weights. Making an explicit copy with copy() helped.
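A minimal sketch of the fix, reusing the question's mnist_classifier and assuming the model lives on the CPU (where .numpy() would otherwise share memory):

x = list(mnist_classifier.named_parameters())
# .copy() allocates a fresh array, so edits no longer reach the model
weight = x[0][1].detach().cpu().numpy().squeeze().copy()
weight[0] = weight[0] * 0 + 0.1  # modifies only the local copy
print(list(mnist_classifier.named_parameters()))  # conv1.weight is unchanged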


What the hell is going on with stochastic gradient descent

I am working with multivariate linear regression and using stochastic gradient descent to optimize.
I am working on this dataset:
http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/
For every run, all hyperparameters and everything else are kept the same: epochs=200 and alpha=0.1.
On the first run I got final_cost=0.0591; running the program again with everything unchanged I got final_cost=1.0056; running again, final_cost=0.8214; running again, final_cost=15.9591; running again, final_cost=2.3162; and so on.
As you can see, keeping everything the same and running again and again, the final cost sometimes changes by a large amount, e.g. from 0.8 straight to 15.9, or from 0.05 to 1.00. On top of that, the graph of the cost after every epoch within a single run is very zigzag, unlike batch GD, where the cost graph decreases smoothly.
I can't understand why SGD behaves so strangely, giving different results on different runs.
I tried the same with batch GD and everything is absolutely fine and smooth, as expected. With batch GD, no matter how many times I run the same code, the result is exactly the same every time.
But in the case of SGD, I literally cried:
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

class Abalone:
    def __init__(self, df, epochs=200, miniBatchSize=250, alpha=0.1):
        self.df = df.dropna()
        self.epochs = epochs
        self.miniBatchSize = miniBatchSize
        self.alpha = alpha
        print("abalone created")
        self.modelTheData()

    def modelTheData(self):
        self.TOTAL_ATTR = len(self.df.columns) - 1
        self.TOTAL_DATA_LENGTH = len(self.df.index)
        # first 60% of the rows form the training set, the remaining 40% the test set
        self.df_trainingData = self.df.drop(self.df.index[int(self.TOTAL_DATA_LENGTH * 0.6):])
        self.TRAINING_DATA_SIZE = len(self.df_trainingData)
        self.df_testingData = self.df.drop(self.df.index[:int(self.TOTAL_DATA_LENGTH * 0.6)])
        self.TESTING_DATA_SIZE = len(self.df_testingData)
        self.miniBatchSize = int(self.TRAINING_DATA_SIZE / 10)  # override: 10 mini-batches per epoch
        self.thetaVect = np.zeros((self.TOTAL_ATTR + 1, 1), dtype=float)
        self.stochasticGradientDescent()

    def stochasticGradientDescent(self):
        self.finalCostArr = np.array([])
        startTime = time.time()
        for i in range(self.epochs):
            # reshuffle the training data at the start of every epoch
            self.df_trainingData = self.df_trainingData.sample(frac=1).reset_index(drop=True)
            # slice the shuffled data into mini-batches (.loc slicing is end-inclusive)
            miniBatches = [self.df_trainingData.loc[x:x + self.miniBatchSize - 1, :]
                           for x in range(0, self.TRAINING_DATA_SIZE, self.miniBatchSize)]
            self.epochCostArr = np.array([])
            for j in miniBatches:
                tempMat = j.values
                self.actualValVect = tempMat[:, self.TOTAL_ATTR:]  # last column is the target
                tempMat = tempMat[:, :self.TOTAL_ATTR]
                # design matrix: a column of ones (bias) followed by the features
                self.desMat = np.append(np.ones((len(j.index), 1), dtype=float), tempMat, 1)
                del tempMat
                self.trainData()
                currCost = self.costEvaluation()
                self.epochCostArr = np.append(self.epochCostArr, currCost)
            # record the cost of the last mini-batch of this epoch
            self.finalCostArr = np.append(self.finalCostArr,
                                          self.epochCostArr[len(miniBatches) - 1])
        endTime = time.time()
        print(f"execution time : {endTime - startTime}")
        self.graphEvaluation()
        print(f"final cost : {self.finalCostArr[len(self.finalCostArr) - 1]}")
        print(self.thetaVect)

    def trainData(self):
        self.predictedValVect = self.predictResult()
        diffVect = self.predictedValVect - self.actualValVect
        partialDerivativeVect = np.matmul(self.desMat.T, diffVect)
        self.thetaVect -= (self.alpha / len(self.desMat)) * partialDerivativeVect

    def predictResult(self):
        return np.matmul(self.desMat, self.thetaVect)

    def costEvaluation(self):
        cost = sum((self.predictedValVect - self.actualValVect) ** 2)
        return cost / (2 * len(self.actualValVect))

    def graphEvaluation(self):
        plt.title("cost at end of all epochs")
        x = range(len(self.epochCostArr))
        y = self.epochCostArr
        plt.plot(x, y)
        plt.xlabel("iterations")
        plt.ylabel("cost")
        plt.show()
I kept epochs=200 and alpha=0.1 for all runs, but I got a totally different result in each run.
The vector shown below is the theta vector, where the first entry is the bias and the remaining entries are the weights.
RUN 1 =>>
[[ 5.26020144]
[ -0.48787333]
[ 4.36479114]
[ 4.56848299]
[ 2.90299436]
[ 3.85349625]
[-10.61906207]
[ -0.93178027]
[ 8.79943389]]
final cost : 0.05917831328836957
RUN 2 =>>
[[ 5.18355814]
[ -0.56072668]
[ 4.32621647]
[ 4.58803884]
[ 2.89157598]
[ 3.7465471 ]
[-10.75751065]
[ -1.03302031]
[ 8.87559247]]
final cost: 1.0056239103948563
RUN 3 =>>
[[ 5.12836056]
[ -0.43672936]
[ 4.25664898]
[ 4.53397465]
[ 2.87847224]
[ 3.74693215]
[-10.73960775]
[ -1.00461585]
[ 8.85225402]]
final cost : 0.8214901206702101
RUN 4 =>>
[[ 5.38794798]
[ 0.23695412]
[ 4.43522951]
[ 4.66093372]
[ 2.9460605 ]
[ 4.13390252]
[-10.60071883]
[ -0.9230675 ]
[ 8.87229324]]
final cost: 15.959132174895712
RUN 5 =>>
[[ 5.19643132]
[ -0.76882106]
[ 4.35445135]
[ 4.58782119]
[ 2.8908931 ]
[ 3.63693031]
[-10.83291949]
[ -1.05709616]
[ 8.865904 ]]
final cost: 2.3162151072779804
I am unable to figure out what is going wrong. Does SGD behave like this, or did I do something wrong while converting my code from batch GD to SGD? And if SGD does behave like this, how do I know how many times I have to rerun it? I am not so lucky that every first run gives a cost as small as 0.05; sometimes the first run gives a cost around 10.5, sometimes 0.6, and maybe by rerunning many times I would get a cost even smaller than 0.05.
When I approached the exact same problem with the exact same code and hyperparameters, just replacing the SGD function with ordinary batch GD, I got the expected result: after each iteration over the same data, the cost decreases smoothly, i.e. it is a monotonically decreasing function, and no matter how many times I rerun the same program I get exactly the same result, as is obvious.
"Keeping everything the same but using batch GD with epochs=20000 and alpha=0.1,
I got final_cost=2.7474."
def BatchGradientDescent(self):
    self.costArr = np.array([])
    startTime = time.time()
    for i in range(self.epochs):
        tempMat = self.df_trainingData.values
        self.actualValVect = tempMat[:, self.TOTAL_ATTR:]
        tempMat = tempMat[:, :self.TOTAL_ATTR]
        self.desMat = np.append(np.ones((self.TRAINING_DATA_SIZE, 1), dtype=float), tempMat, 1)
        del tempMat
        self.trainData()
        if i % 100 == 0:
            currCost = self.costEvaluation()
            self.costArr = np.append(self.costArr, currCost)
    endTime = time.time()
    print(f"execution time : {endTime - startTime} seconds")
    self.graphEvaluation()
    print(self.thetaVect)
    print(f"final cost : {self.costArr[len(self.costArr) - 1]}")
Can somebody help me figure out what is actually going on? Every opinion/solution is of great value to me in this new field :)
You missed the most important (and only) difference between GD ("Gradient Descent") and SGD ("Stochastic Gradient Descent").
Stochasticity literally means "the quality of lacking any predictable order or plan", i.e. randomness.
This means that while in the GD algorithm the order of the samples in each epoch remains constant, in SGD the order is randomly shuffled at the beginning of every epoch.
So every run of GD with the same initialization and hyperparameters will produce the exact same results, while SGD will most definitely not (as you have experienced).
The reason for using stochasticity is to prevent the model from memorizing the training samples (which would result in overfitting, where accuracy on the training set is high but accuracy on unseen samples is bad).
Now, regarding the big differences in final cost values between runs in your case, my guess is that your learning rate is too high. You can use a lower constant value, or better yet, use a decaying learning rate (which gets lower as the epochs progress).
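As a rough illustration of that suggestion (a sketch with made-up constants, not the asker's code), an inverse-time decay shrinks alpha as training progresses, so late updates are smaller and the final cost fluctuates less between runs:

initial_alpha = 0.1   # hypothetical starting learning rate
decay = 0.01          # hypothetical decay constant
epochs = 200
for epoch in range(epochs):
    alpha = initial_alpha / (1 + decay * epoch)  # inverse-time decay
    # ... run this epoch's mini-batch updates using the current alpha ...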

Log_probabilities returned by tf.nn.ctc_beam_search_decoder

I am training an LSTM-CTC speech recognition system, using beam search decoding in the following configuration:
decoded, log_prob = tf.nn.ctc_beam_search_decoder(
    inputs,
    sequence_length,
    beam_width=100,
    top_paths=3,
    merge_repeated=True
)
The log_prob values returned by the above decoder for a batch look like:
[[ 14.73168373, 14.45586109, 14.35735512],
[ 20.45407486, 20.44991684, 20.41798401],
[ 14.9961853 , 14.925807 , 14.88066769],
...,
[ 18.89863396, 18.85992241, 18.85712433],
[ 3.93567419, 3.92791557, 3.89198923],
[ 14.56258488, 14.55923843, 14.51092243]],
So how do these scores represent log probabilities, and if I want to compare the confidence of the top paths across examples, what should the normalisation factor be?
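A hedged sketch only, assuming the scores can be treated as unnormalised log probabilities: a softmax across the top_paths of one example gives each path's confidence relative to the others within that example (it does not by itself give a normalisation for comparing across examples):

import numpy as np

scores = np.array([14.73168373, 14.45586109, 14.35735512])  # one row of log_prob
rel_conf = np.exp(scores - scores.max())  # subtract max for numerical stability
rel_conf /= rel_conf.sum()                # softmax over the three beam paths
print(rel_conf)                           # relative confidence of the top paths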

logstash parse complex message from Telegram

I'm processing Telegram history (a txt file) and I need to extract and process a quite complex (nested) multiline pattern.
Here's the whole pattern
Free_Trade_Calls__AltSignals:IOC/ BTC (bittrex)
BUY : 0.00164
SELL :
TARGET 1 : 0.00180
TARGET 2 : 0.00205
TARGET 3 : 0.00240
STOP LOS : 0.000120
2018-04-19 15:46:57 Free_Trade_Calls__AltSignals:TARGET
basically I am looking for a pattern starting with
Free_Trade_Calls__AltSignals:
and ending with a timestamp.
Inside that pattern (the Telegram message) I need to extract:
- the exchange, in brackets in the 1st line
- the value after BUY
- the SELL values in an array of 3, SELL[3]: TARGET 1-3
- the STOP loss value (the label can be STOP, STOP LOSS, or STOP LOS)
I've found this: Logstash grok multiline message, but I am very new to Logstash; a friend recommended it to me. I was trying to parse this text in NodeJS, but it was a real pain and drove me mad.
Thanks Rob :)
Since you need to grab values from each line, you don't need to use the multiline modifier. You can skip empty lines with the %{SPACE} pattern.
For your given log, this pattern can be used:
Free_Trade_Calls__AltSignals:.*\(%{WORD:exchange}\)\s*BUY\s*:\s*%{NUMBER:BUY}\s*SELL :\s*TARGET 1\s*:\s*%{NUMBER:TARGET_1}\s*TARGET 2\s*:\s*%{NUMBER:TARGET_2}\s*TARGET 3\s*:\s*%{NUMBER:TARGET_3}\s*.*:\s*%{NUMBER:StopLoss}
Please note that \s* is equivalent to %{SPACE} (see the Python sanity-check sketch after the output below).
It will output,
{
"exchange": [
[
"bittrex"
]
],
"BUY": [
[
"0.00164"
]
],
"BASE10NUM": [
[
"0.00164",
"0.00180",
"0.00205",
"0.00240",
"0.000120"
]
],
"TARGET_1": [
[
"0.00180"
]
],
"TARGET_2": [
[
"0.00205"
]
],
"TARGET_3": [
[
"0.00240"
]
],
"StopLoss": [
[
"0.000120"
]
]
}
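For those more comfortable in Python than Logstash, here is a hedged sanity-check sketch: the grok pattern above expands to a plain regex (%{WORD:x} is roughly (?P<x>\w+) and %{NUMBER:x} roughly matches a decimal number), so the capture logic can be tested with the re module before deploying it:

import re

pattern = (r"Free_Trade_Calls__AltSignals:.*\((?P<exchange>\w+)\)\s*"
           r"BUY\s*:\s*(?P<BUY>[\d.]+)\s*SELL :\s*"
           r"TARGET 1\s*:\s*(?P<TARGET_1>[\d.]+)\s*"
           r"TARGET 2\s*:\s*(?P<TARGET_2>[\d.]+)\s*"
           r"TARGET 3\s*:\s*(?P<TARGET_3>[\d.]+)\s*"
           r".*:\s*(?P<StopLoss>[\d.]+)")

message = """Free_Trade_Calls__AltSignals:IOC/ BTC (bittrex)
BUY : 0.00164
SELL :
TARGET 1 : 0.00180
TARGET 2 : 0.00205
TARGET 3 : 0.00240
STOP LOS : 0.000120"""

m = re.search(pattern, message)
print(m.groupdict())  # {'exchange': 'bittrex', 'BUY': '0.00164', ...}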

genfromtxt returns a numpy array not separated by commas

I have a *.csv file that stores two columns of float data.
I am using this function to import it, but the data it generates is not separated by commas.
data=np.genfromtxt("data.csv", delimiter=',', dtype=float)
output:
[[ 403.14915 150.560364 ]
[ 403.7822265 135.13165 ]
[ 404.5017 163.4669 ]
[ 434.02465 168.023224 ]
[ 373.7655 177.904114 ]
[ 450.608429 208.4187315]
[ 454.39475 239.9666595]
[ 453.8055 248.4082 ]
[ 457.5625305 247.70315 ]
[ 451.729431 258.19335 ]
[ 366.74405 225.169922 ]
[ 377.0055235 258.110077 ]
[ 380.3581 261.760071 ]
[ 383.98615 262.33805 ]
[ 388.2516785 272.715332 ]
[ 408.378174 200.9713135]]
How do I format it to get a numpy array like
[[ 403.14915, 150.560364 ]
[ 403.7822265, 135.13165 ],....]
?
NumPy doesn't display commas when you print arrays. If you really want to see them, you can use
print(repr(data))
The repr function forces a str representation not meant for "nice" printing, but for the literal representation you would use to type the data in your code.
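A small self-contained sketch of the difference, using sample values from the question:

import numpy as np

data = np.array([[403.14915, 150.560364],
                 [403.7822265, 135.13165]])
print(str(data))   # what print(data) shows: no commas
print(repr(data))  # array([[403.14915  , 150.560364 ], ...]): with commas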

Hierarchical Clustering with cosine similarity metric in fcluster package

I use scipy.cluster.hierarchy to do hierarchical clustering on a set of points using the "cosine" similarity metric. As an example, I have:
import numpy as np
import scipy.cluster.hierarchy as hac
import matplotlib.pyplot as plt
Points = np.array([[ 0.         , 0.23508573],
[ 0.00754775 , 0.26717266],
[ 0.00595464 , 0.27775905],
[ 0.01220563 , 0.23622067],
[ 0.00542628 , 0.14185873],
[ 0.03078922 , 0.11273108],
[ 0.06707743 ,-0.1061131 ],
[ 0.04411757 ,-0.10775407],
[ 0.01349434 , 0.00112159],
[ 0.04066034 , 0.11639591],
[ 0. , 0.29046682],
[ 0.07338036 , 0.00609912],
[ 0.01864988 , 0.0316196 ],
[ 0. , 0.07270636],
[ 0. , 0. ]])
z = hac.linkage(Points, metric='cosine', method='complete')
labels = hac.fcluster(z, 0.1, criterion="distance")
plt.scatter(Points[:, 0], Points[:, 1], c=labels.astype(float))
plt.show()
Since I use the cosine metric, in some cases the dot product of two vectors can be negative or the norm of some vectors can be zero. This means the z output will have some negative or infinite elements, which is not valid for fcluster (as below):
z =
[[ 0.00000000e+00 1.00000000e+01 0.00000000e+00 2.00000000e+00]
[ 1.30000000e+01 1.50000000e+01 0.00000000e+00 3.00000000e+00]
[ 8.00000000e+00 1.10000000e+01 4.26658708e-13 2.00000000e+00]
[ 1.00000000e+00 2.00000000e+00 2.31748880e-05 2.00000000e+00]
[ 3.00000000e+00 4.00000000e+00 8.96700489e-05 2.00000000e+00]
[ 1.60000000e+01 1.80000000e+01 3.98805492e-04 5.00000000e+00]
[ 1.90000000e+01 2.00000000e+01 1.33225099e-03 7.00000000e+00]
[ 5.00000000e+00 9.00000000e+00 2.41120340e-03 2.00000000e+00]
[ 6.00000000e+00 7.00000000e+00 1.52914684e-02 2.00000000e+00]
[ 1.20000000e+01 2.20000000e+01 3.52441432e-02 3.00000000e+00]
[ 2.10000000e+01 2.40000000e+01 1.38662986e-01 1.00000000e+01]
[ 1.70000000e+01 2.30000000e+01 6.99056531e-01 4.00000000e+00]
[ 2.50000000e+01 2.60000000e+01 1.92543748e+00 1.40000000e+01]
[ -1.00000000e+00 2.70000000e+01 inf 1.50000000e+01]]
To solve this problem, I checked the linkage() function, and inside it I needed to check the _hierarchy.linkage() method. I use the PyCharm text editor, and when I asked for the "linkage" source code, it opened up a Python file named "_hierarchy.py" inside a directory like the following:
.PyCharm40/system/python_stubs/-1247972723/scipy/cluster/_hierarchy.py
This Python file doesn't have a definition for any of the included functions.
I am wondering what the correct source of this function is so I can revise it, or whether there is another way to solve this problem.
I would appreciate your help and hints.
You have a zero vector (0, 0) in your data set. For such data, cosine distance is undefined, so you are using an inappropriate distance function!
This is a definition gap that cannot be trivially closed. inf is as incorrect as 0. The distance to (0, 0) with cosine cannot be defined without contradictions. You must not use cosine on such data.
Back to your actual question: _hierarchy is a Cython module. It is not pure Python; it is compiled to native code. You can easily see the source code on GitHub:
https://github.com/scipy/scipy/blob/master/scipy/cluster/_hierarchy.pyx
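As a hedged illustration of the first point (a sketch, not part of the answer): dropping all-zero rows before clustering avoids the undefined cosine distance entirely:

import numpy as np
import scipy.cluster.hierarchy as hac

Points = np.array([[0.0, 0.23508573],
                   [0.00754775, 0.26717266],
                   [0.0, 0.0]])                       # the zero vector that breaks cosine
Points = Points[np.linalg.norm(Points, axis=1) > 0]   # keep only nonzero rows
z = hac.linkage(Points, metric='cosine', method='complete')
labels = hac.fcluster(z, 0.1, criterion="distance")
print(labels)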
