How to find node voltage by VDR - circuit

How do I determine the node voltage Vab using the voltage divider rule (VDR)?
Circuit diagram

You should ask questions like this on the electrical engineering Stack Exchange site here: https://electronics.stackexchange.com/
But anyway, since the resistors are all in series, you can simply sum them to find the total series resistance:
Rt = 2.5 + 1.5 + 0.6 + 0.9 + 0.5 = 6 Ohms
This means the current through the circuit is:
I = V/R = 0.36/6 = 0.06 A (60 mA)
The voltage Vab is Vab = Va - Vb, with both node voltages measured from the same reference node, so:
Va = IR = 0.06 * (1.5 + 0.6 + 0.9 + 0.5) = 0.06 * 3.5 = 0.21 V (210 mV)
Vb = IR = 0.06 * 0.5 = 0.03 V (30 mV)
Vab = Va - Vb = 0.21 - 0.03 = 0.18 V (180 mV)
This can be shortened to:
Vab = IR = 0.06 * (1.5 + 0.6 + 0.9) = 0.06 * 3 = 0.18 V (180 mV)
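To sanity-check the arithmetic, here is a small Python snippet that applies the voltage divider rule directly (assuming, as in the shortened calculation above, that a and b sit across the 1.5, 0.6 and 0.9 Ohm resistors and that the source is 0.36 V):
# Series resistors in ohms; Vab is taken across the middle three (assumption from above).
resistors = [2.5, 1.5, 0.6, 0.9, 0.5]
v_source = 0.36  # volts

r_total = sum(resistors)          # 6.0 ohms
i = v_source / r_total            # 0.06 A (60 mA)

# Voltage divider rule: the drop across a subset of series resistors is Vs * R_subset / R_total.
r_ab = 1.5 + 0.6 + 0.9            # 3.0 ohms
v_ab = v_source * r_ab / r_total  # 0.18 V (180 mV)

print(i, v_ab)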
To further your learning, I would suggest reading up on Ohm's law and the voltage divider rule.

Related

Calculating the average score of students from a txt file and returning a dict

I'm quite new to Python and have been working on this problem for a week; I still can't figure it out, so please help.
The txt input file looks like this (the first number in each line is the Student ID; Math, Phsc, Chem and Bio each have 4 scores, the rest have 5, with subjects separated by ';'):
StudentID, Math, Physics, Chemistry, Biology, Literature, Language, History, Geography
1; 5,7,7,8;5,5,6,6;8,6,7,7;4,8,5,7;7,7,6,7,9;7,5,8,6,7;7,8,8,5,9;5,8,6,8,7
2; 8,6,8,6;5,5,8,4;4,9,9,7;4,9,3,4;6,7,7,7,4;8,9,6,7,5;5,7,7,9,6;6,6,4,4,7
3; 5,8,9,8;7,8,8,7;6,6,7,6;5,7,9,7;6,3,5,8,8;5,6,6,6,8;7,7,6,6,7;8,5,3,6,4
4; 7,9,9,8;7,9,7,6;10,7,6,7;7,9,8,7;6,8,8,5,7;8,6,6,4,8;7,5,8,6,7;7,6,8,6,8
5; 9,7,4,6;4,6,5,5;7,5,6,7;6,9,7,6;7,9,7,6,6;6,7,7,8,8;7,9,6,8,6;8,6,8,8,5
6; 6,7,7,7;4,6,9,7;5,5,7,7;7,6,5,7;7,9,7,8,7;8,7,7,8,9;9,9,8,8,9;8,7,9,7,5
Math, Phsc, Chem and Bio have 4 weights, one per score: 5%, 10%, 15%, 70%. For example, the average for Math for Student 1 = 5x5% + 7x10% + 7x15% + 8x70% = 7.6.
Litr, Lang, Hist and Geo have 5 weights: 5%, 10%, 10%, 15%, 60%.
Requirement:
Calculate the average score of each student and output it to a dict like this:
{'Student 1': {'Math': 9.00, 'Physics': 8.55, …}, 'Student 2': {…, 'History': 9.00, 'Geography': 8.55}}
Thank you.
Assuming that script.py and your text file student.txt are in the same path (directory):
final_dict = {}
with open("student.txt", "r") as f:
    for idx, l in enumerate(f.readlines()):
        if l != "\n":
            if idx == 0:
                # Header line: grab the subject names (everything after "StudentID")
                l = l.replace("\n", "")
                header = l.split(", ")[1:]
            else:
                # Data line: the first character is the student ID (fine for single-digit IDs)
                final_dict.update({f"Student {l[0]}": {}})
                marks = l.split("; ")[1].replace("\n", "").split(";")
                for i, mark in enumerate(marks):
                    current_subject_int_marks = tuple(map(int, mark.split(",")))
                    len_marks = len(current_subject_int_marks)
                    if len_marks < 5:
                        # 4 scores: weights 5%, 10%, 15%, 70%
                        avr = (
                            current_subject_int_marks[0] * 0.05
                            + current_subject_int_marks[1] * 0.10
                            + current_subject_int_marks[2] * 0.15
                            + current_subject_int_marks[3] * 0.70
                        )
                    else:
                        # 5 scores: weights 5%, 10%, 10%, 15%, 60%
                        avr = (
                            current_subject_int_marks[0] * 0.05
                            + current_subject_int_marks[1] * 0.10
                            + current_subject_int_marks[2] * 0.10
                            + current_subject_int_marks[3] * 0.15
                            + current_subject_int_marks[4] * 0.60
                        )
                    final_dict[f"Student {l[0]}"].update({header[i]: avr})
print(final_dict)
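If it helps, the per-subject weighted average can also be written as a small helper using zip; this is just a sketch of the same computation (the function name weighted_avg is mine, not from the question):
def weighted_avg(scores, weights):
    # Sum of score * weight pairs, e.g. 5*0.05 + 7*0.10 + 7*0.15 + 8*0.70
    return sum(s * w for s, w in zip(scores, weights))

weights_4 = [0.05, 0.10, 0.15, 0.70]        # Math, Physics, Chemistry, Biology
weights_5 = [0.05, 0.10, 0.10, 0.15, 0.60]  # Literature, Language, History, Geography

print(weighted_avg([5, 7, 7, 8], weights_4))  # Student 1's Math average: 7.6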

Where is my code hanging (in an infinite loop)?

I am new to Python and trying to get this script to run, but it seems to be hanging in an infinite loop. When I use ctrl+c to stop it, it is always on line 103.
vs = 20.05 * np.sqrt(Tb + Lb * (y - y0)) # m/s speed of sound as a function of temperature
I am used to MATLAB (from school) and its editor. I ran into encoding issues with this code earlier. Any suggestions for a (free) editor? I am currently using jEdit and/or Notepad.
Here is the full script:
#!/usr/bin/env python
# -*- coding: ANSI -*-
import numpy as np
from math import *
from astropy.table import Table
import matplotlib.pyplot as plt
from hanging_threads import start_monitoring#test for code hanging
start_monitoring(seconds_frozen=10, test_interval=100)
"""Initial Conditions and Inputs"""
d = 154.71/1000 # diameter of bullet (in meters)
m = 46.7 # mass of bullet ( in kg)
K3 = 0.87*0.3735 # drag coefficient at supersonic speed
Cd1 = 0.87*0.108 #drag coefficient at subsonic speed
v0 = 802 # muzzle velocity in m/sec
dt = 0.01 # timestep in seconds
"""coriolis inputs"""
L = 90*np.pi/180 # radians - latitude of firing site
AZ = 90*np.pi/180 # radians - azimuth angle of fire measured clockwise from North
omega = 0.0000727 #rad/s rotation of the earth
"""wind inputs"""
wx = 0 # m/s
wz = 0 # m/s
"""initializing variables"""
vx = 0 #initial x velocity
vy = 0 #initial y velocity
vy0 = 0
y_max = 0 #apogee
v = 0
t = 0
x = 0
"""Variable Atmospheric Pressure"""
rho0 = 1.2041 # density of air at sea-level (kg/m^3)
T = 20 #temperature at sea level in celcius
Tb = T + 273.15 # temperature at sea level in Kelvin
Lb = -2/304.8 # temperature lapse rate in K/m (-2degrees/1000ft)- not valid above 36000ft
y = 0 # current altitude
y0 = 0 # initial altitude
g = 9.81 # acceleration due to gravity in m/s/s
M = 0.0289644 #kg/mol # molar mass of air
R = 8.3144598 # J/molK - universal gas constant
# air density as a function of altitude and temperature
rho = rho0 * ((Tb/(Tb+Lb*(y-y0)))**(1+(g*M/(R*Lb))))
"""Variable Speed of Sound"""
vs = 20.05*np.sqrt(Tb +Lb*(y-y0)) # m/s speed of sound as a function of temperature
Area = pi*(d/2)**2 # computing the reference area
phi_incr = 5 #phi0 increment (degrees)
N = 12 # length of table
"""Range table"""
dtype = [('phi0', 'f8'), ('phi_impact', 'f8'), ('x', 'f8'), ('z', 'f8'),('y', 'f8'), ('vx', 'f8'), ('vz', 'f8'), ('vy', 'f8'), ('v', 'f8'),('M', 'f8'), ('t', 'f8')]
table = Table(data=np.zeros(N, dtype=dtype))
"""Calculates entire trajectory for each specified angle"""
for i in range(N):
    phi0 = (i + 1) * phi_incr
    """list of initial variables used in while loop"""
    t = 0
    y = 0
    y_max = y
    x = 0
    z = 0
    vx = v0*np.cos(radians(phi0))
    vy = v0*np.sin(radians(phi0))
    vx_w = 0
    vz_w = 0
    vz = 0
    v = v0
    ay = 0
    ax = 0
    wx = wx
    wz = wz
    rho = rho0 * ((Tb / (Tb + Lb * (y - y0))) ** (1 + (g * M / (R * Lb))))
    vs = 20.05 * np.sqrt(Tb + Lb * (y - y0)) # m/s speed of sound as a function of temperature
    ax_c = -2 * omega * ((vz * sin(L)) + vy * cos(L) * sin(AZ))
    ay_c = 2 * omega * ((vz * cos(L) * cos(AZ)) + vx_w * cos(L) * sin(AZ))
    az_c = -2 * omega * ((vy * cos(L) * cos(AZ)) - vx_w * sin(L))
    Mach = v/vs
    """ initializing variables for plots"""
    t_list = [t]
    x_list = [x]
    y_list = [y]
    vy_list = [vy]
    v_list = [v]
    phi0_list = [phi0]
    Mach_list = [Mach]
    while y >= 0:
        phi0 = phi0
        """drag calculation with variable density, Temp and sound speed"""
        rho = rho0 * ((Tb / (Tb + Lb * (y - y0))) ** (1 + (g * M / (R * Lb))))
        vs = 20.05 * np.sqrt(Tb + Lb * (y - y0)) # m/s speed of sound as a function of temperature
        Cd3 = K3 / sqrt(v / vs)
        Mach = v/vs
        """Determining drag regime"""
        if v > 1.2 * vs: #supersonic
            Cd = Cd3
        elif v < 0.8 * vs: #subsonic
            Cd = Cd1
        else: #transonic
            Cd = ((Cd3 - Cd1)*(v/vs - 0.8)/(0.4)) + Cd1
        """Acceleration due to Coriolis"""
        ax_c = -2*omega*((vz_w*sin(L)) + vy*cos(L)*sin(AZ))
        ay_c = 2*omega*((vz_w*cos(L)*cos(AZ)) + vx_w*cos(L)*sin(AZ))
        az_c = -2*omega*((vy*cos(L)*cos(AZ)) - vx_w*sin(L))
        """Total acceleration calcs"""
        if vx > 0:
            ax = -0.5*rho*((vx-wx)**2)*Cd*Area/m + ax_c
        else:
            ax = 0
        """ Vy before and after peak"""
        if vy > 0:
            ay = (-0.5 * rho * (vy ** 2) * Cd * Area / m) - g + ay_c
        else:
            ay = (0.5 * rho * (vy ** 2) * Cd * Area / m) - g + ay_c
        az = az_c
        vx = vx + ax*dt # vx without wind
        # vx_w = vx with drag and no wind + wind
        vx_w = vx + 2*wx*(1-(vx/v0*np.cos(radians(phi0))))
        vy = vy + ay*dt
        vz = vz + az*dt
        vz_w = vz + wz*(1-(vx/v0*np.cos(radians(phi0))))
        """projectile velocity"""
        v = sqrt(vx_w**2 + vy**2 + vz**2)
        """new x, y, z positions"""
        x = x + vx_w*dt
        y = y + vy*dt
        z = z + vz_w*dt
        if y_max <= y:
            y_max = y
            phi_impact = degrees(atan(vy/vx)) #impact angle in degrees
            """ appends selected data for ability to plot"""
            t_list.append(t)
            x_list.append(x)
            y_list.append(y)
            vy_list.append(vy)
            v_list.append(v)
            phi0_list.append(phi0)
            Mach_list.append(Mach)
            if y < 0:
                break
            t += dt
"""Range table output"""
table[i] = ('%.f' % phi0, '%.3f' % phi_impact, '%.1f' % x,'%.2f' % z, '%.1f' % y_max, '%.1f' % vx_w,'%.1f' % vz,'%.1f' % vy,'%.1f' % v,'%.2f' %Mach, '%.1f' % t)
""" Plot"""
plt.plot(x_list, y_list, label='%d°' % phi0)#plt.plot(x_list, y_list, label='%d°' % phi0)
plt.title('Altitude versus Range')
plt.ylabel('Altitude (m)')
plt.xlabel('Range (m)')
plt.axis([0, 30000, 0, 15000])
plt.grid(True)
print(table)
legend = plt.legend(title="Firing Angle",loc=0, fontsize='small', fancybox=True)
plt.show()
Thank you in advance
Which Editor Should I Use?
Personally, I prefer VSCode, but Sublime is also pretty popular. If you really want to go barebones, try Vim. All three are completely free.
Code Errors
After scanning your code snippet, it appears that you are caught in an infinite loop, which you enter with the statement while y >= 0. The reason Ctrl+C always reports line 103 is likely that that line takes the longest to execute, so an interrupt is most likely to land on it at any given moment.
Note that currently, you can only escape your while loop through this branch:
if y_max <= y:
    y_max = y
    phi_impact = degrees(atan(vy/vx)) #impact angle in degrees
    """ appends selected data for ability to plot"""
    t_list.append(t)
    x_list.append(x)
    y_list.append(y)
    vy_list.append(vy)
    v_list.append(v)
    phi0_list.append(phi0)
    Mach_list.append(Mach)
    if y < 0:
        break
    t += dt
This means that if y_max never drops below y, or y never drops below zero, then you will loop forever. Granted, I haven't looked at your code in great depth, but on the surface it appears that y_max is never decremented (meaning it will always be at least equal to y). Furthermore, y is only updated by y = y + vy*dt, which will only ever increase y if vy >= 0 (I assume dt is always positive).
Debugging
As @Giacomo Catenazzi suggested, try printing out y and y_max at the top of the while loop and see how they change as your code runs. I suspect they are not decrementing the way you expected.
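For example, a minimal instrumentation sketch (using the variable names from your script; the rest of the loop body stays as it is):
while y >= 0:
    # Temporary debug output: watch how y, y_max and vy evolve on every pass.
    print("t = %.2f s, y = %.2f m, y_max = %.2f m, vy = %.2f m/s" % (t, y, y_max, vy))
    # ... rest of the loop body unchanged ...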

Manually implementing approximation functions

I have a dataset from Kaggle of 45,253 rows and a single column of temperature in Kelvin for the city of Detroit. Its mean = 282.97, std = 11, min = 243.48, max = 308.05.
This is the result when plotted as a histogram of 100 bins with density=True:
I am expected to write the following two approximation functions and see which one approximates the histogram most closely:
Like this one here using scipy.stats.norm.pdf:
I generated the above image using:
x = np.linspace(dataset.Detroit.min(), dataset.Detroit.max(), 1001)
P_norm = norm.pdf(x, dataset.Detroit.mean(), dataset.Detroit.std())
plot_pdf_single(x, P_norm)
However, whenever I try to implement either of the two approximation functions, all of my values for P_norm come out as 0 or inf.
This is what I tried:
P_norm = [(1.0/(np.sqrt(2.0*pi*(std*std))))*np.exp(((-x_i-mu)*(-x_i-mu))/(2.0*(std*std))) for x_i in x]
I also broke it down into parts for a single x_i:
part1 = ((-x[0] - mu)*(-x[0] - mu)) / (2.0*(std * std))
part2 = np.exp(part1)
part3 = 1.0 / (np.sqrt(2.0 * pi * (std*std)))
total = part3*part2
I got the following values:
part1 = 1145.3913234604413
part2 = inf
part3 = 0.036267480036493875
total = inf
The culprit is the sign in the exponent: (-x_i - mu) is -(x_i + mu), and the leading minus of the exponent is missing, so you end up taking exp() of a large positive number, which overflows to inf. The exponent should be -(x_i - mu)^2 / (2 * std^2). Since both of the approximations use the same formula, define it once:
def pdf_approximation(x_i, mu, std):
    return (1.0 / (np.sqrt(2.0 * pi * (std*std)))) * np.exp((-(x_i-mu)*(x_i-mu)) / (2.0 * (std*std)))
The code for the first approximation is:
mu = 283
std = 11
P_norm = np.array([pdf_approximation(x_i, mu, std) for x_i in x])
plot_pdf_single(x, P_norm)
The code for the second approximation is:
mu1 = 276
std1 = 6
mu2 = 293
std2 = 6.5
P_norm = np.array([(pdf_approximation(x_i, mu1, std1) * 0.5) + (pdf_approximation(x_i, mu2, std2) * 0.5) for x_i in x])
plot_pdf_single(x, P_norm)
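As a side note, the same thing can be done without the per-element list comprehension by letting NumPy broadcast over the whole array; a sketch (the function name pdf_approximation_vec is mine, with the means and standard deviations from above):
import numpy as np

def pdf_approximation_vec(x, mu, std):
    # Gaussian PDF evaluated element-wise on the whole array at once.
    return np.exp(-(x - mu) ** 2 / (2.0 * std ** 2)) / np.sqrt(2.0 * np.pi * std ** 2)

x = np.linspace(243.48, 308.05, 1001)

# First approximation: a single normal.
P_norm = pdf_approximation_vec(x, 283, 11)

# Second approximation: an equal-weight mixture of two normals.
P_mix = 0.5 * pdf_approximation_vec(x, 276, 6) + 0.5 * pdf_approximation_vec(x, 293, 6.5)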

How do I correctly code linear regression with gradient descent in Python?

import pandas as pd
import matplotlib.pyplot as plt
# I'm trying to code the utter basic func of LinearRegression
# from sklearn.linear_model import LinearRegression
dataframe = pd.read_fwf('brain_body.txt') # link given below
x_values = dataframe[['Brain']]
y_values = dataframe[['Body']]
lr = LinearRegression(0.0001, 10) # sending learning_rate and iterations
lr.fit(x_values, y_values)
# commenting out because the values are insane
# plt.scatter(x_values, y_values)
# plt.plot(x_values, clf.predict(x_values))
# plt.show()
Link to brain_body.txt
Here's the class I've written
class LinearRegression:
    def __init__(self, learning_rate, iterations):
        self.b = 0  # b as in y=mx+b
        self.m = 0  # m as in y=mx+b
        self.learning_rate = learning_rate
        self.iterations = iterations

    def get_y(self, x):
        return self.m * float(x) + self.b

    def step_gradient(self, x_values, y_values):
        print()
        print("Values before: m =", self.m, " b =", self.b)
        m_gradient = 0
        b_gradient = 0
        N = float(len(x_values.ix[:, 0]))
        print('%11s' % "d(m)", '%11s' % "m_gradient", '%11s' % "d(b)", '%11s' % "b_gradient")
        for i in range(int(N)):
            x = x_values.iloc[i][0]
            y = y_values.iloc[i][0]
            # EDIT: I missed a * -1 here
            # But that wouldn't just fix everything, adjusting learning rate does
            pm = (y - self.get_y(x)) * x  # partial derivative of m
            pb = (y - self.get_y(x)) * -1  # partial derivative of b
            m_gradient += pm * 2 / N
            b_gradient += pb * 2 / N
            print('%11s' % pm, '%11s' % m_gradient, '%11s' % pb, '%11s' % b_gradient)
        self.m -= self.learning_rate * m_gradient  # adjust current m
        self.b -= self.learning_rate * b_gradient  # adjust current b
        print("Values after: m =", self.m, " b =", self.b)
        print()

    def fit(self, x_values, y_values):  # equivalent to train_model
        for i in range(self.iterations):
            self.step_gradient(x_values, y_values)
        return

    def predict(self, x_values):  # equivalent to get_output
        predictions = []
        for x in x_values.ix[:, 0]:
            predictions.append(self.get_y(x))
        return predictions
I watched Siraj Raval's "How to do Linear Regression the right way" and followed it almost exactly. I did learn what partial derivatives and gradient descent are, but I do not know what values the partial derivatives should take (or how to guess them). And the numbers go crazy in the very first iteration:
Values before: m = 0 b = 0
d(m) m_gradient d(b) b_gradient
150.6325 4.85911290323 -44.5 -1.43548387097
7.44 5.09911290323 -15.5 -1.93548387097
10.935 5.45185483871 -8.1 -2.19677419355
196695.0 6350.45185484 -423.0 -15.8419354839
4341.435 6490.49814516 -119.5 -19.6967741935
3180.9 6593.10782258 -115.0 -23.4064516129
1456.306 6640.08543548 -98.2 -26.5741935484
5.72 6640.26995161 -5.5 -26.7516129032
243.02 6648.10930645 -58.0 -28.6225806452
2.72 6648.19704839 -6.4 -28.8290322581
0.404 6648.21008065 -4.0 -28.9580645161
5.244 6648.37924194 -5.7 -29.1419354839
6.6 6648.59214516 -6.6 -29.3548387097
0.0007 6648.59216774 -0.14 -29.3593548387
0.06 6648.59410323 -1.0 -29.3916129032
37.8 6649.81345806 -10.8 -29.74
24.6 6650.60700645 -12.3 -30.1367741935
10.71 6650.95249032 -6.3 -30.34
11723841.0 384839.371845 -4603.0 -178.823870968
0.0069 384839.372068 -0.3 -178.833548387
78394.9 387368.23981 -419.0 -192.349677419
341255.0 398376.465616 -655.0 -213.478709677
2.7475 398376.554245 -3.5 -213.591612903
1150.0 398413.651019 -115.0 -217.301290323
84.48 398416.376181 -25.6 -218.127096774
1.0 398416.408439 -5.0 -218.288387097
24.675 398417.204406 -17.5 -218.852903226
359720.0 410021.075374 -680.0 -240.788387097
84042.0 412732.107632 -406.0 -253.88516129
27625.0 413623.236665 -325.0 -264.369032258
9.225 413623.534245 -12.3 -264.765806452
81840.0 416263.534245 -1320.0 -307.346451613
38007648.0 1642316.69554 -5712.0 -491.604516129
13.65 1642317.13586 -3.9 -491.730322581
1217.2 1642356.40037 -179.0 -497.504516129
1960.0 1642419.62618 -56.0 -499.310967742
68.85 1642421.84715 -17.0 -499.859354839
0.12 1642421.85102 -1.0 -499.891612903
0.0092 1642421.85132 -0.4 -499.904516129
0.0025 1642421.8514 -0.25 -499.912580645
17.5 1642422.41591 -12.5 -500.315806452
122500.0 1646374.02882 -490.0 -516.122258065
30.25 1646375.00462 -12.1 -516.512580645
9712.5 1646688.31107 -175.0 -522.157741935
15700.0 1647194.76269 -157.0 -527.222258065
22950.4 1647935.09817 -440.0 -541.415806452
1893.725 1647996.18607 -179.5 -547.206129032
1.32 1647996.22865 -2.4 -547.283548387
4860.0 1648153.00285 -81.0 -549.896451613
75.6 1648155.44156 -21.0 -550.573870968
168.0896 1648160.8638 -39.2 -551.838387097
0.532 1648160.88096 -1.9 -551.899677419
0.09 1648160.88387 -1.2 -551.938387097
0.366 1648160.89567 -3.0 -552.03516129
0.01584 1648160.89619 -0.33 -552.045806452
34560.0 1649275.73489 -180.0 -557.852258065
75.0 1649278.15425 -25.0 -558.658709677
27040.0 1650150.41231 -169.0 -564.110322581
2.34 1650150.4878 -2.6 -564.194193548
18.468 1650151.08354 -11.4 -564.561935484
0.26 1650151.09193 -2.5 -564.642580645
213.444 1650157.97722 -50.4 -566.268387097
Values after: m = -165.015797722 b = 0.0566268387097
Values after 10 iteration: m = -1.76899770934e+22 b = 4.21166966984e+18
How do I correctly implement LinearRegression from scratch?
This might not be a true answer, as it's using R (I could probably figure this out in Python, but it would take me longer). I think your issue is the size of your learning_rate. I'm taking this machine learning class at the moment, so I'm familiar with what you're doing, and I attempted to implement it myself. Here's my code:
library(ggplot2)

## create test data
data <- data.frame(x = 1:10, y = 1:10)
n <- nrow(data)

## initialize values
m <- 0
b <- 0
alpha <- 0.01
iters <- 100

results <- data.frame(i = 1:iters,
                      pm = 1:iters,
                      pb = 1:iters,
                      m = 1:iters,
                      b = 1:iters)

for (i in 1:iters) {
  y_hat <- (m * data$x) + b
  pm <- (1/n) * sum((y_hat - data$y) * data$x)
  pb <- (1/n) * sum(y_hat - data$y)
  m <- m - (alpha * pm)
  b <- b - (alpha * pb)

  ## uncomment if you want; shows "animated" change
  ## p <- ggplot(data, aes(x = x, y = y)) + geom_point()
  ## p <- p + geom_abline(intercept = b, slope = m)
  ## print(p)

  ## this turned out to be key for looking at output
  results[i, 2:5] <- c(pm, pb, m, b)
}
Now, note the end of results with a big alpha, 0.1:
> tail(results)
i pm pb m b
95 95 -2.864612e+45 -4.114745e+44 2.135518e+44 3.067470e+43
96 96 8.390457e+45 1.205210e+45 -6.254938e+44 -8.984628e+43
97 97 -2.457567e+46 -3.530062e+45 1.832073e+45 2.631600e+44
98 98 7.198218e+46 1.033956e+46 -5.366146e+45 -7.707961e+44
99 99 -2.108360e+47 -3.028460e+46 1.571745e+46 2.257664e+45
100 100 6.175391e+47 8.870365e+46 -4.603646e+46 -6.612702e+45
See how m and b are flip-flopping? The learning rate alpha is so high that alpha * derivative keeps jumping over the minimum! In the linked class this is shown in the gradient descent videos, but the concept is the same as this image I found:
Look at results using alpha = 0.01:
> tail(results)
i pm pb m b
95 95 -0.003483741 0.02425319 0.9834438 0.1152615
96 96 -0.003476426 0.02420226 0.9834785 0.1150195
97 97 -0.003469127 0.02415144 0.9835132 0.1147780
98 98 -0.003461842 0.02410073 0.9835478 0.1145370
99 99 -0.003454573 0.02405012 0.9835824 0.1142965
100 100 -0.003447319 0.02399962 0.9836169 0.1140565
It's slow, but we're homing in on m = 1 and b = 0 as expected. With your real data, I had a similar issue. The main code body is the same, with this replacing the data <- data.frame() line at the beginning:
data <- read.table(file = "https://raw.githubusercontent.com/llSourcell/linear_regression_demo/master/brain_body.txt",
                   header = T, sep = "", stringsAsFactors = F)
names(data) <- c("y", "x")
Everything else is the same, except that I played with alpha and iters. Here's what I found!
## your learning rate; diverging/flip-flopping
## alpha <- 0.0001
> tail(results)
i pm pb m b
95 95 -3.842565e+190 -1.167811e+187 3.801319e+186 1.155276e+183
96 96 3.541406e+192 1.076285e+189 -3.503393e+188 -1.064732e+185
97 97 -3.263851e+194 -9.919315e+190 3.228817e+190 9.812842e+186
98 98 3.008048e+196 9.141894e+192 -2.975760e+192 -9.043766e+188
99 99 -2.772294e+198 -8.425404e+194 2.742537e+194 8.334966e+190
100 100 2.555018e+200 7.765068e+196 -2.527592e+196 -7.681718e+192
## 1/10 as big; still diverging!
## alpha <- 0.00001
> tail(results)
i pm pb m b
95 95 -2.453089e+92 -7.455293e+88 2.189776e+87 6.655047e+83
96 96 2.040052e+93 6.200012e+89 -1.821074e+88 -5.534508e+84
97 97 -1.696559e+94 -5.156089e+90 1.514452e+89 4.602638e+85
98 98 1.410902e+95 4.287936e+91 -1.259457e+90 -3.827672e+86
99 99 -1.173342e+96 -3.565957e+92 1.047397e+91 3.183190e+87
100 100 9.757815e+96 2.965541e+93 -8.710418e+91 -2.647222e+88
## even smaller; that's better!
## alpha <- 0.000001
> tail(results)
i pm pb m b
95 95 -0.01579109 51.95899 0.8856351 -0.004667159
96 96 -0.01579107 51.95894 0.8856352 -0.004719118
97 97 -0.01579106 51.95889 0.8856352 -0.004771077
98 98 -0.01579104 51.95885 0.8856352 -0.004823036
99 99 -0.01579103 51.95880 0.8856352 -0.004874995
100 100 -0.01579102 51.95875 0.8856352 -0.004926953
With this final result, I plotted the data and the fitted line, which look reasonable:
p <- ggplot(data, aes(x = x, y = y)) + geom_point()
p <- p + geom_abline(intercept = b, slope = m)
print(p)
So, to wrap up:
I didn't verify/check your Python code
I did implement my understanding of gradient descent in R and tried it on a simple test set to verify the behavior
I re-tried it with your actual data and found that it appears to work
Thus, my recommendation would be to re-try your method with simplified data (it sounds like you already might have) and then look at the initial steps with a very small learning rate to see if that fixes it. If not, there may still be something wrong with your math?
Hope that helps!
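For reference, here is a rough Python sketch of the same batch gradient-descent loop (my own translation of the R code above, not the asker's class; it assumes brain_body.txt is readable with pandas.read_fwf as in the question):
import pandas as pd

df = pd.read_fwf("brain_body.txt")
x = df["Brain"].to_numpy(dtype=float)
y = df["Body"].to_numpy(dtype=float)

m, b = 0.0, 0.0
alpha = 1e-6   # the very small learning rate that worked above
iters = 100

for _ in range(iters):
    y_hat = m * x + b
    pm = ((y_hat - y) * x).mean()   # partial derivative w.r.t. m (factor of 2 dropped, as in the R code)
    pb = (y_hat - y).mean()         # partial derivative w.r.t. b
    m -= alpha * pm
    b -= alpha * pb

print(m, b)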

Gradient Descent diverges, learning rate too high

There is a piece of code below which does one step of GD, but theta is diverging. What could be wrong?
from numpy import arange, array
from numpy.random import uniform

X = arange(100)
Y = 50 + 4*X + uniform(-20, 20, X.shape)
theta = array([0, 0])
alpha = 0.001

# one step of GD
theta0 = theta[0] - alpha * sum(theta[0] + theta[1]*x - y for x, y in zip(X, Y)) / len(X)
theta1 = theta[1] - alpha * sum((theta[0] + theta[1]*x - y)*x for x, y in zip(X, Y)) / len(X)
theta = [theta0, theta1]
The learning rate was too high. Lowering it fixes the divergence:
alpha = 0.0001
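To see the effect, here is a quick sketch (my own illustration, not from the original post) that repeats the same update with the corrected learning rate; theta no longer blows up, although because X is not normalized the slope converges much faster than the intercept:
from numpy import arange, array
from numpy.random import uniform

X = arange(100)
Y = 50 + 4*X + uniform(-20, 20, X.shape)

theta = array([0.0, 0.0])
alpha = 0.0001  # corrected learning rate

for step in range(10000):
    error = theta[0] + theta[1]*X - Y           # vectorized residuals
    theta0 = theta[0] - alpha * error.mean()
    theta1 = theta[1] - alpha * (error * X).mean()
    theta = array([theta0, theta1])

print(theta)  # slope heads toward ~4; the intercept (~50) needs many more iterations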
