I'm getting an error in the loop on lines 36 and 68, and I don't know why - alpha-beta pruning

I'm making a chess engine using alpha-beta pruning for a project, and here is my code. I'm getting an error on lines 36 and 68, and I'm not sure how or why. Please help me, and thank you in advance for your answers.
import chess

def evaluate():
    if board.is_checkmate():
        if board.turn:
            return -9999
        else:
            return 9999
    if board.is_stalemate():
        return 0
    if board.is_insufficient_material():
        return 0
    wp = len(board.pieces(chess.PAWN, chess.WHITE))
    bp = len(board.pieces(chess.PAWN, chess.BLACK))
    wn = len(board.pieces(chess.KNIGHT, chess.WHITE))
    bn = len(board.pieces(chess.KNIGHT, chess.BLACK))
    wb = len(board.pieces(chess.BISHOP, chess.WHITE))
    bb = len(board.pieces(chess.BISHOP, chess.BLACK))
    wr = len(board.pieces(chess.ROOK, chess.WHITE))
    br = len(board.pieces(chess.ROOK, chess.BLACK))
    wq = len(board.pieces(chess.QUEEN, chess.WHITE))
    bq = len(board.pieces(chess.QUEEN, chess.BLACK))
    material = 100 * (wp - bp) + 320 * (wn - bn) + 330 * (wb - bb) + 500 * (wr - br) + 900 * (wq - bq)
    return material

def alphabeta(position, depth_, alpha=float('inf'), beta=-float('inf')):
    """Returns [eval, best move] for the position at the given depth"""
    if depth_ == 0 or position.is_game_over():
        return [position.evaluate(), None]
    else:
        if position.turn == chess.WHITE:
            best_move = None
            for _move in [position.legal_moves]:
                new_position = position.push(_move)
                score, move_ = alphabeta(new_position, depth_ - 1, alpha, beta)
                if score > alpha:  # white maximizes their score
                    alpha = score
                    best_move = _move
                if alpha >= beta:  # alpha-beta cutoff
                    break
            return [alpha, best_move]
        else:
            best_move = None
            for move_ in position.legal_moves:
                new_position = position.push(move_)
                score, move_ = alphabeta(new_position, depth_ - 1, alpha, beta)
                if score < beta:  # black minimizes their score
                    beta = score
                    best_move = move_
                if alpha >= beta:  # alpha-beta cutoff
                    break
            return [alpha, best_move]

fen_ = input('Enter fen: ')
board = chess.Board(fen_)
_depth = int(input('Enter depth: '))
engine = alphabeta(board, _depth)
print(board, engine[0], engine[1])
board.push(engine[0])
Also, there are errors on lines 2181 and 3602.

So, here is the corrected code (I explain what was wrong just after it):
import chess

def evaluate(position):
    # Negamax convention: scores are from the side to move's point of view,
    # so the side to move being checkmated is always the worst score.
    if position.is_checkmate():
        return -9999
    if position.is_stalemate():
        return 0
    if position.is_insufficient_material():
        return 0
    wp = len(position.pieces(chess.PAWN, chess.WHITE))
    bp = len(position.pieces(chess.PAWN, chess.BLACK))
    wn = len(position.pieces(chess.KNIGHT, chess.WHITE))
    bn = len(position.pieces(chess.KNIGHT, chess.BLACK))
    wb = len(position.pieces(chess.BISHOP, chess.WHITE))
    bb = len(position.pieces(chess.BISHOP, chess.BLACK))
    wr = len(position.pieces(chess.ROOK, chess.WHITE))
    br = len(position.pieces(chess.ROOK, chess.BLACK))
    wq = len(position.pieces(chess.QUEEN, chess.WHITE))
    bq = len(position.pieces(chess.QUEEN, chess.BLACK))
    material = 100 * (wp - bp) + 320 * (wn - bn) + 330 * (wb - bb) + 500 * (wr - br) + 900 * (wq - bq)
    return material if position.turn else -material

def alphabeta(position, depth_, alpha=-float('inf'), beta=float('inf')):
    """Returns [eval, best move] for the position at the given depth"""
    if depth_ == 0 or position.is_game_over():
        return [evaluate(position), None]
    best_move = None
    for _move in position.legal_moves:
        position.push(_move)
        score, move_ = alphabeta(position, depth_ - 1, -beta, -alpha)
        score = -score
        position.pop()
        if score > alpha:  # the side to move maximizes its own score
            alpha = score
            best_move = _move
        if alpha >= beta:  # alpha-beta cutoff
            break
    return [alpha, best_move]

fen_ = input('Enter fen: ')
board = chess.Board(fen_)
_depth = int(input('Enter depth: '))
engine = alphabeta(board, _depth)
print(board, engine[0], engine[1])
board.push(engine[1])
So, first of all, alpha-beta is rarely written this way in chess programming. Instead, we prefer the negamax formulation of it, and here is the pseudo-code:
int alphaBeta( int alpha, int beta, int depthleft ) {
    if( depthleft == 0 ) return quiesce( alpha, beta );
    for ( all moves) {
        score = -alphaBeta( -beta, -alpha, depthleft - 1 );
        if( score >= beta )
            return beta;   // fail hard beta-cutoff
        if( score > alpha )
            alpha = score; // alpha acts like max in MiniMax
    }
    return alpha;
}
Then, in this line of code (new_position = position.push(_move)), you were doing something strange. The move is made on the board, and you never undo it. You also assign the result as if the chess.Board.push method returned a chess.Board, which it does not: it mutates the board in place. Instead, you should use something like this:
position.push(_move)
... some code ...
position.pop() # undo the move
And the last bug is here (board.push(engine[0])). engine[0] is not a move but your alpha-beta evaluation score, so the correct index is the following: board.push(engine[1]).
This gives you functional code.
BUT, to make your chess engine better, an evaluation as simple as yours is not enough. The material count alone is not a very good indicator of who is winning. So, I recommend you implement piece-square tables (PSQT) and a quiescence search.
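To make that recommendation concrete, here is a minimal sketch of a PSQT term. The knight table values and the psqt_bonus helper are illustrative assumptions, not tuned constants:

# Minimal PSQT sketch (illustrative, untuned values): a piece-square table
# gives each piece a small bonus depending on the square it stands on, so
# knights, for example, prefer the centre. A real engine has one table per
# piece type; only a knight table is shown here.
KNIGHT_PSQT = [
    -50, -40, -30, -30, -30, -30, -40, -50,
    -40, -20,   0,   5,   5,   0, -20, -40,
    -30,   5,  10,  15,  15,  10,   5, -30,
    -30,   0,  15,  20,  20,  15,   0, -30,
    -30,   5,  15,  20,  20,  15,   5, -30,
    -30,   0,  10,  15,  15,  10,   0, -30,
    -40, -20,   0,   0,   0,   0, -20, -40,
    -50, -40, -30, -30, -30, -30, -40, -50,
]

def psqt_bonus(position):
    # Hypothetical helper: white-perspective positional bonus for knights.
    # The table is written with rank 8 on top, so white squares are mirrored.
    bonus = 0
    for sq in position.pieces(chess.KNIGHT, chess.WHITE):
        bonus += KNIGHT_PSQT[chess.square_mirror(sq)]
    for sq in position.pieces(chess.KNIGHT, chess.BLACK):
        bonus -= KNIGHT_PSQT[sq]
    return bonus

You would add psqt_bonus(position) to the material term in evaluate (flipping the sign for the side to move, exactly like material), and a quiescence search would replace the depth-0 evaluate call with a captures-only search so the engine does not stop evaluating in the middle of an exchange.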

Related

Electric Vehicle Routing Path Simulation Giving Wrong Output

My electric vehicle charging algorithm aims to return the shortest path length travelled by a charging drone. The drone meets each electric vehicle at a rendezvous point and charges the vehicle. Vehicle selection is based on the charging urgency of the vehicles, and the total path length is computed once all vehicles have been charged in order of urgency. The pseudocode is:
EDF(List_Req)
    SunchrgEV = List_Req
    Pathlen = 0
    Nchgreq = 0
    while (SunchrgEV != ∅):  // while some EVs remain uncharged
        U obtains the updated location of all ei element of SunchrgEV via message exchange
        S'gcs = set of GCS element of Sgcs
        Gr = min {Euclidean distance between Loc(ei) and Loc(Gj)}  // the closest GCS
        ex = min [{Euclidean distance between Loc(ei) and Loc(Gr)} / ResidDist(ei)]  // the most urgent EV
        if SoC(U) < RqB2D(ex)  // then U (the drone) itself needs charge and reports its termination to the server;
                               // the server dispatches another U (drone) to charge the remaining EVs (ei)
            Gr = min {Euclidean distance between Loc(U) and Loc(Gj)}  // the closest GCS to U, where U goes to fully charge
        end if
        t = tcurrent
        // find the rendezvous point where ex and U meet
        RdvLoc = FindRdvLoc(ex, Loc(ex), Loc(U), t)
        if RdvLoc is out of service area of U then
            Report2server(ex, 'Outofservicearea')
            continue
        end if
        Pathlen += Dist2(Loc(U), RdvLoc)
        U sends RdvLoc to ex and flies to RdvLoc
        if U reaches ex in ChgRng(U) then
            Pathlen += Charge(ex, RdvLoc)
            Nchgreq++
        else
            Report2server(ex, 'Outofservicearea')
        end if
        SunchrgEV -= ex
    do
        Update(Loc(U))
        Gr = min {Euclidean distance between Loc(U) and Loc(Gj)}  // U returns to Gr
        Pathlen += Dist2(Loc(U), Loc(Gr))
    return Pathlen and Nchgreq

// compute the expected location where U meets e on Map(e)
def FindRdvLoc(e, Le, Lu, t):
    Compute X = (x, y) on Rte(e, t) so that {Dist(e, X) / Speed(e, t)} = {Dist(U, X - ChgRng(U)) / Speed(U)}
    return X

def Charge(ei, RdvLoc):
    U starts to charge ei and follows Rte(ei, t)
    eLoc = FindLoc(Map(ei), Loc(ei), ChgTime(ei), Speed(ei, t))
    return DistM(Map(ei), RdvLoc, eLoc)
The code that I have written so far gives the same output regardless of the input. The code is:
import math
import sys
from typing import List, Any


class Location:
    x = 0
    y = 0

    def __init__(self, x, y):
        self.x = x
        self.y = y


class Result:
    path_len: int = 0
    n_charge_request: int = 0

    def __init__(self, path_len, n_charge_request):
        self.path_len = path_len
        self.n_charge_request = n_charge_request

    def to_string(self):
        return "path len : " + str(self.path_len) + ", n_charge_request : " + str(self.n_charge_request)

    def print(self):
        print(self.to_string())


# SGCs
class ChargingStation:
    def __init__(self, location: Location, fuel):
        self.location = location
        self.fuel = fuel


# EVs (ex)
class Vehicle:
    location = Location(0, 0)
    fuel = 1
    isNeedEmergencyFuel = False
    per_fuel_distance_travel = 2

    def __init__(self, id, location, fuel):
        self.id = id
        self.location = location
        self.fuel = fuel

    # Resid (U)
    def residual_distance(self):
        return self.fuel * self.per_fuel_distance_travel  # assuming each watt or kW of fuel yields 2 km or miles

    # RqB2D
    def requested_amount_of_charge(self, nearest_charge_station_location: Location) -> int:
        distance = get_distance(self.location, nearest_charge_station_location)
        cover = self.fuel * self.per_fuel_distance_travel
        diff = math.fabs(distance - cover)
        if diff > 0:
            needed_fuel = diff / self.per_fuel_distance_travel + 2
            return needed_fuel
        return 0


# U(i)
class Drone:
    location = Location(0, 0)
    fuel = 0
    isFlying = False
    isSendForEvCharge = False
    per_fuel_distance_travel = 20
    serving_radius = 15
    G = [
        ChargingStation(Location(20, 10), 50),
        ChargingStation(Location(50, 80), 40),
        ChargingStation(Location(30, 30), 60)
    ]

    def __init__(self, location, fuel):
        self.location = location
        self.fuel = fuel

    # Resid (U)
    def residual_distance(self):
        return self.fuel * self.per_fuel_distance_travel  # assuming each unit of fuel yields 2 km or miles

    def is_out_of_service_zone(self, vehicle_location: Location):  # ----->
        distance = get_distance(self.location, vehicle_location)
        return distance > self.serving_radius


def get_distance(from_location, to_location):
    x_dis = to_location.x - from_location.x
    y_dis = to_location.y - from_location.y
    x_dis_sqr = math.pow(x_dis, 2)
    y_dis_sqr = math.pow(y_dis, 2)
    final_dis_sum = x_dis_sqr + y_dis_sqr
    final = math.sqrt(final_dis_sum)
    return final


class EDFGenerator:
    min_fuel = 50
    charging_stations_for_drones = [ChargingStation(Location(2, 10), 80), ChargingStation(Location(2, 10), 50),
                                    ChargingStation(Location(2, 10), 100)]
    list_waiting_drones = [Drone(Location(5, 10), 50), Drone(Location(2, 10), 50), Drone(Location(2, 10), 50)]
    list_sent_drones = []
    list_charging_drones = []

    def __init__(self):
        pass

    def rdv_loc(self, ei: Vehicle, drone_location: Location, t: int) -> Location | None:
        needed_fuel = ei.requested_amount_of_charge(drone_location)
        nearest_charge_station = self.get_nearest_charge_station(ei.location, self.charging_stations_for_drones)
        needed_fuel_at_nearest_station = needed_fuel / ei.per_fuel_distance_travel
        if nearest_charge_station.fuel < needed_fuel_at_nearest_station:
            return None
        else:
            return nearest_charge_station.location

    @staticmethod
    def get_nearest_charge_station(from_location: Location, list_of_stations: List[ChargingStation]) -> ChargingStation:
        nearest = list_of_stations[0]
        min_distance = get_distance(from_location, nearest.location)
        for station in list_of_stations:
            dis = get_distance(from_location, station.location)
            if min_distance > dis:
                min_distance = dis
                nearest = station
        return nearest

    def NChgReq(self) -> int:
        charging_requesters = 0
        for drone in self.list_waiting_drones:
            if drone.isNeedEmergencyFuel:
                charging_requesters += 1
        return charging_requesters

    def send_drone_to_charge(self, drone: Drone):  # ----->
        if drone.fuel < self.min_fuel:
            nearest_station = self.get_nearest_charge_station(
                drone.location,
                self.charging_stations_for_drones)
            self.list_sent_drones.append(drone)
            self.list_waiting_drones.remove(drone)
            drone.isSendForEvCharge = True
            # send the drone to the charging station here
            print(f"Drone {drone} sent to charging station at {nearest_station}")

    def check_fuel_and_send_to_charge(self):  # ----->
        for drone in self.list_waiting_drones:
            self.send_drone_to_charge(drone)

    def get_drone(self, max_try=4) -> Drone:
        if max_try <= 0:
            print("max try failed for get_drone")
            return None
        # take one item from list_waiting_drones ------->
        # add it to list_sent_drones
        # send or return the taken item
        if len(self.list_waiting_drones) == 0:
            return None
        else:
            # lastOne = self.list_waiting_drones.pop()
            self.last_one = self.list_waiting_drones.pop()
            if self.last_one.fuel < self.min_fuel:
                print("low fuel failed to get_drone, retry")
                self.list_waiting_drones.insert(0, self.last_one)
                return max_try - 1
            self.list_sent_drones.append(self.last_one)
            return self.last_one

    def get_closest_location_from_sending_server_to_e_vehicle(self, current_vechicle_location):
        min_distance = sys.maxsize
        for current_server in self.charging_stations_for_drones:
            distance = get_distance(current_vechicle_location, current_server.location)
            if min_distance > distance:
                min_distance = distance
        return min_distance

    def get_most_urgent_electric_vehicle(self, closest_distance_between_server_ev: int,
                                         all_uncharged_electric_vehicles: List[Vehicle]) -> Vehicle:
        final_ev = None
        min_distance = sys.maxsize
        for ev in all_uncharged_electric_vehicles:
            g_r = self.get_closest_location_from_sending_server_to_e_vehicle(ev.location)
            residual_distance = ev.residual_distance()
            eq = g_r / residual_distance
            if min_distance > eq:
                min_distance = eq
                final_ev = ev
        return final_ev

    def reports_termination_to_server(self, drone: Drone):
        self.list_charging_drones.append(drone)
        self.charge_all_waiting_drones()

    def charge_all_waiting_drones(self):
        # assuming the environment is not async but synchronous
        for drone in self.list_charging_drones:
            drone.fuel = 100
            self.list_waiting_drones.insert(0, drone)
        self.list_charging_drones.clear()

    @staticmethod
    def report_to_server(ev: Vehicle, message):
        print(ev.id + " - " + message)

    def get_edf(self, list_req: List[Vehicle]) -> Result:
        s_uncharged_electric_vehicles = list_req
        path_len = 0
        n_charge_req = 0
        while len(s_uncharged_electric_vehicles) > 0:
            print("uncharged_ev : " + str(len(s_uncharged_electric_vehicles)))
            current_uncharged_ev = s_uncharged_electric_vehicles[0]
            u = self.get_drone()
            if u is None:
                print("no drones from any station or with min charge")
                return Result(path_len, n_charge_req)
            # current_uncharged_ev.location aka e ----->
            e = current_uncharged_ev.location
            # confusion: SGCS -- what would be the SET?
            g_r: int = self.get_closest_location_from_sending_server_to_e_vehicle(
                e)  # closest station to the sending location
            # confusion regarding dist(Loc(e), g_r): g_r already contains the distance
            e_x = self.get_most_urgent_electric_vehicle(g_r, s_uncharged_electric_vehicles)
            drone_residual_distance = u.residual_distance()  # Resid (U)
            ev_requested_amount_of_charge = e_x.requested_amount_of_charge(u.location)  # RqB2D
            if drone_residual_distance < ev_requested_amount_of_charge:
                self.reports_termination_to_server(u)
                u = self.get_drone()
                g_r_2 = self.get_closest_location_from_sending_server_to_e_vehicle(
                    u.location)  # closest station to the drone
                self.reports_termination_to_server(u)  # sends the drone back for charging
            # ?? t is something confusing; how to include t in the equation ??
            rdv_loc = self.rdv_loc(e_x, u.location, 0)  # t should be random
            if rdv_loc is None:
                self.report_to_server(e_x, "rdv location generate failed")
                continue
            if u.is_out_of_service_zone(rdv_loc):
                self.report_to_server(e_x, "Out of Service")
                continue
            path_len += get_distance(u.location, rdv_loc)
            u.location = rdv_loc
            e_x.location = rdv_loc
            # list_1 = filter(lambda x: x[3] <= 0.3 and x[2] < 5, list_1)
            s_uncharged_electric_vehicles.remove(e_x)
            n_charge_req = n_charge_req + 1
        return Result(path_len, n_charge_req)


if __name__ == '__main__':
    edf_runner = EDFGenerator()
    un_charged_vehicles = [
        Vehicle(1, Location(1, 1), 2),
        Vehicle(2, Location(5, 10), 16),
        Vehicle(3, Location(6, 10), 13),
        Vehicle(4, Location(8, 11), 22),
        Vehicle(5, Location(5, 6), 35),
    ]
    edf_result = edf_runner.get_edf(un_charged_vehicles)
    print("first_path_len - edf : ")
    edf_result.print()
Where am I wrong? What needs fixing?
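One detail worth checking (an observation on the code above, not part of the original question): get_drone returns the bare integer max_try - 1 on the low-fuel path instead of retrying, so a caller can receive an int where a Drone is expected. A minimal sketch of a recursive retry could look like this:

def get_drone(self, max_try=4) -> Drone:
    # Sketch: retry by recursing with a decremented counter instead of
    # returning the integer max_try - 1.
    if max_try <= 0:
        print("max try failed for get_drone")
        return None
    if len(self.list_waiting_drones) == 0:
        return None
    last_one = self.list_waiting_drones.pop()
    if last_one.fuel < self.min_fuel:
        print("low fuel failed to get_drone, retry")
        self.list_waiting_drones.insert(0, last_one)
        return self.get_drone(max_try - 1)  # actually retry
    self.list_sent_drones.append(last_one)
    return last_one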

Particle Collision Simulation Python

I'm trying to create a relatively simple particle simulation which should account for gravity, drag, collisions with other particles (inelastic) and collisions with walls (perfectly elastic). I got the gravity and drag part working with the velocity Verlet algorithm, but as of right now it is not capable of settling the particles into an equilibrium state. Furthermore, if I add multiple particles they sometimes climb on top of each other, which is due (I believe) to them still having very small velocity components that only asymptotically approach zero. I tried cutting off the velocity of a particle once its energy gets sufficiently small, but it doesn't look realistic. Could somebody point out some advice on how to fix these issues? I have a particle object:
import pygame
import random
import numpy as np
import operator
from itertools import combinations


class Particle:
    def __init__(self):
        self.mass = 10
        self.radius = random.randint(10, 50)
        self.width, self.height = 700, 500
        self.pos = np.array((self.width / 2, self.height / 2))
        self.v = np.array((0.0, 0.0))
        self.acc = np.array((0.0, 0.0))
        self.bounce = 0.95
I use velocity Verlet integration to account for gravity and drag forces:
def update(self, ball, dt):
    new_pos = np.array((ball.pos[0] + ball.v[0] * dt + ball.acc[0] * (dt * dt * 0.5),
                        ball.pos[1] + ball.v[1] * dt + ball.acc[1] * (dt * dt * 0.5)))
    new_acc = np.array((self.apply_forces(ball)))  # only needed if acceleration is not constant
    new_v = np.array((ball.v[0] + (ball.acc[0] + new_acc[0]) * (dt * 0.5),
                      ball.v[1] + (ball.acc[1] + new_acc[1]) * (dt * 0.5)))
    ball.pos = new_pos
    ball.v = new_v
    ball.acc = new_acc

def apply_forces(self, ball):
    grav_acc = [0.0, 9.81]
    drag_force = [0.5 * self.drag * (ball.v[0] * abs(ball.v[0])),
                  0.5 * self.drag * (ball.v[1] * abs(ball.v[1]))]  # D = 0.5 * (rho * C * Area * vel^2)
    drag_acc = [drag_force[0] / ball.mass, drag_force[1] / ball.mass]  # a = F/m
    return (-drag_acc[0]), (grav_acc[1] - drag_acc[1])
And here I calculate the collision part:
def collision(self):
    pairs = combinations(range(len(self.ball_list)), 2)
    for i, j in pairs:
        part1 = self.ball_list[i]
        part2 = self.ball_list[j]
        distance = list(map(operator.sub, self.ball_list[i].pos, self.ball_list[j].pos))
        if np.hypot(*distance) < self.ball_list[i].radius + self.ball_list[j].radius:
            distance = part1.pos - part2.pos
            rad = part1.radius + part2.radius
            slength = (part1.pos[0] - part2.pos[0])**2 + (part1.pos[1] - part2.pos[1])**2
            length = np.hypot(*distance)
            factor = (length - rad) / length
            x = part1.pos[0] - part2.pos[0]
            y = part1.pos[1] - part2.pos[1]
            part1.pos[0] -= x * factor * 0.5
            part1.pos[1] -= y * factor * 0.5
            part2.pos[0] += x * factor * 0.5
            part2.pos[1] += y * factor * 0.5
            u1 = (part1.bounce * (x * part1.v[0] + y * part1.v[1])) / slength
            u2 = (part2.bounce * (x * part2.v[0] + y * part2.v[1])) / slength
            part1.v[0] = u2 * x - u1 * x
            part1.v[1] = u1 * x - u2 * x
            part2.v[0] = u2 * y - u1 * y
            part2.v[1] = u1 * y - u2 * y

def check_boundaries(self, ball):
    if ball.pos[0] + ball.radius > self.width:
        ball.v[0] *= -ball.bounce
        ball.pos[0] = self.width - ball.radius
    if ball.pos[0] < ball.radius:
        ball.v[0] *= -ball.bounce
        ball.pos[0] = ball.radius
    if ball.pos[1] + ball.radius > self.height:
        self.friction = True
        ball.v[1] *= -ball.bounce
        ball.pos[1] = self.height - ball.radius
    elif ball.pos[1] < ball.radius:
        ball.v[1] *= -ball.bounce
        ball.pos[1] = ball.radius
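One standard trick for the jitter described above, shown here as a minimal sketch rather than a definitive fix, is a restitution threshold as used in many physics engines: below a small impact speed, a bounce is treated as fully inelastic, so resting particles stop trading tiny velocities. REST_SPEED is a hypothetical cutoff to be tuned for your units and timestep:

# Minimal restitution-threshold sketch: below REST_SPEED the bounce is fully
# inelastic, which lets particles come to rest instead of jittering forever.
REST_SPEED = 5.0  # hypothetical cutoff, tune for your units/timestep

def effective_bounce(speed_along_normal, bounce):
    # Kill the restitution for very slow impacts.
    return bounce if abs(speed_along_normal) > REST_SPEED else 0.0

# Usage inside check_boundaries, e.g. for the floor:
#     if ball.pos[1] + ball.radius > self.height:
#         ball.v[1] *= -effective_bounce(ball.v[1], ball.bounce)
#         ball.pos[1] = self.height - ball.radius

Unlike a global velocity cutoff, this only suppresses the bounce at the moment of a slow impact, so freely falling particles are unaffected.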

Any suggestions to make Python code faster?

My present code takes too much time to execute for, say, N = 100000 values. Last time I tried, it took around 4 hours, which is too much computing time. Can someone suggest anything to make the code a little faster?
def gen_chain(N):
    coordinates = np.loadtxt('saw.txt', skiprows=0)
    return coordinates

def lj(rij2):
    sig_by_r6 = np.power(sigma / rij2, 3)
    sig_by_r12 = np.power(sig_by_r6, 2)
    lje = 4.0 * epsilon * (sig_by_r12 - sig_by_r6)
    return lje

def fene(rij2):
    return (-0.5 * K * R**2 * np.log(1 - ((np.sqrt(rij2) - r0)**2 / R**2)))

def total_energy(coord):
    # Non-bonded
    e_nb = 0
    for i in range(N):
        for j in range(i - 1):
            ri = coord[i]
            rj = coord[j]
            rij = ri - rj
            rij2 = np.dot(rij, rij)
            if (np.sqrt(rij2) < rcutoff):
                e_nb += lj(rij2)
    # Bonded
    e_bond = 0
    for i in range(1, N):
        ri = coord[i]
        rj = coord[i - 1]
        rij = ri - rj
        rij2 = np.dot(rij, rij)
        e_bond += fene(rij2)
    return e_nb + e_bond

def move(coord):
    trial = np.ndarray.copy(coord)
    for i in range(N):
        delta = (2.0 * np.random.rand(3) - 1) * max_delta
        trial[i] += delta
    return trial

def accept(delta_e):
    beta = 1.0 / T
    if delta_e <= 0.0:
        return True
    random_number = np.random.rand(1)
    p_acc = np.exp(-beta * delta_e)
    if random_number < p_acc:
        return True
    return False

if __name__ == "__main__":
    # FENE parameters
    K = 40
    R = 0.3
    r0 = 0.7
    # LJ parameters
    sigma = 0.624
    epsilon = 1.0
    # MC parameters
    N = 100  # number of particles
    rcutoff = 2.5 * sigma
    max_delta = 0.01
    n_steps = 100000
    T = 0.5
    coord = gen_chain(N)
    energy_current = total_energy(coord)
    traj = open('traj.xyz', 'w')
    for step in range(n_steps):
        if step % 1000 == 0:
            traj.write(str(N) + '\n\n')
            for i in range(N):
                traj.write("C %10.5f %10.5f %10.5f\n" % (coord[i][0], coord[i][1], coord[i][2]))
            print(step, energy_current)
        coord_trial = move(coord)
        energy_trial = total_energy(coord_trial)
        delta_e = energy_trial - energy_current
        if accept(delta_e):
            coord = coord_trial
            energy_current = energy_trial
    traj.close()
I know it cannot be compared to C/C++, so please don't suggest using any other language. I also welcome suggestions regarding any unnecessary objects.
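For what it's worth, the double loop in total_energy is the obvious hot spot. Here is a minimal sketch, my assumption about one possible speed-up rather than part of the original post, of the same energy computed with NumPy broadcasting instead of Python-level loops (it reproduces the formulas in lj and fene above, including the sigma / rij2 expression as written):

import numpy as np

def total_energy_vec(coord, sigma, epsilon, rcutoff, K, R, r0):
    # Pairwise displacement vectors via broadcasting: diff[i, j] = coord[i] - coord[j].
    diff = coord[:, None, :] - coord[None, :, :]
    rij2 = np.einsum('ijk,ijk->ij', diff, diff)
    # Unique pairs with |i - j| >= 2, matching the original loop, which
    # excludes bonded neighbours from the non-bonded sum.
    iu, ju = np.triu_indices(len(coord), k=2)
    r2 = rij2[iu, ju]
    mask = np.sqrt(r2) < rcutoff
    sig_by_r6 = (sigma / r2[mask]) ** 3  # same expression as lj() above
    e_nb = np.sum(4.0 * epsilon * (sig_by_r6**2 - sig_by_r6))
    # Bonded FENE terms between consecutive beads, same expression as fene().
    bond = np.linalg.norm(coord[1:] - coord[:-1], axis=1)
    e_bond = np.sum(-0.5 * K * R**2 * np.log(1 - (bond - r0)**2 / R**2))
    return e_nb + e_bond

This trades roughly 5000 Python-level loop iterations per energy call (at N = 100) for one N x N array, which is cheap at this size.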

Monte Carlo simulation of a polymer chain system

I want to perform a Monte Carlo simulation of particles interacting via a Lennard-Jones potential plus a FENE potential. I'm getting negative values inside the logarithm of the FENE potential, and the error is "RuntimeWarning: invalid value encountered in log", raised from return (-0.5 * K * R**2 * np.log(1-((np.sqrt(rij2) - r0) / R)**2)). The FENE potential is given by E_FENE(r) = -0.5 K R^2 ln(1 - ((r - r0) / R)^2):
import numpy as np

def gen_chain(N, R0):
    x = np.linspace(1, (N-1)*0.8*R0, num=N)
    y = np.zeros(N)
    z = np.zeros(N)
    return np.column_stack((x, y, z))

def lj(rij2):
    sig_by_r6 = np.power(sigma/rij2, 3)
    sig_by_r12 = np.power(sig_by_r6, 2)
    lje = 4.0 * epsilon * (sig_by_r12 - sig_by_r6)
    return lje

def fene(rij2):
    return (-0.5 * K * R**2 * np.log(1-((np.sqrt(rij2) - r0) / R)**2))

def total_energy(coord):
    # Non-bonded
    e_nb = 0
    for i in range(N):
        for j in range(i-1):
            ri = coord[i]
            rj = coord[j]
            rij = ri - rj
            rij2 = np.dot(rij, rij)
            if (np.sqrt(rij2) < rcutoff):
                e_nb += lj(rij2)
    # Bonded
    e_bond = 0
    for i in range(1, N):
        ri = coord[i]
        rj = coord[i-1]
        rij = ri - rj
        rij2 = np.dot(rij, rij)
        e_bond += fene(rij2)
    return e_nb + e_bond

def move(coord):
    trial = np.ndarray.copy(coord)
    for i in range(N):
        delta = (2.0 * np.random.rand(3) - 1) * max_delta
        trial[i] += delta
    return trial

def accept(delta_e):
    beta = 1.0/T
    if delta_e <= 0.0:
        return True
    random_number = np.random.rand(1)
    p_acc = np.exp(-beta*delta_e)
    if random_number < p_acc:
        return True
    return False

if __name__ == "__main__":
    # FENE parameters
    K = 40
    R = 0.3
    r0 = 0.7
    # LJ parameters
    sigma = r0/0.33
    epsilon = 1.0
    # MC parameters
    N = 50  # number of particles
    rcutoff = 2.5*sigma
    max_delta = 0.01
    n_steps = 10000000
    T = 0.5
    coord = gen_chain(N, R)
    energy_current = total_energy(coord)
    traj = open('traj.xyz', 'w')
    for step in range(n_steps):
        if step % 1000 == 0:
            traj.write(str(N) + '\n\n')
            for i in range(N):
                traj.write("C %10.5f %10.5f %10.5f\n" % (coord[i][0], coord[i][1], coord[i][2]))
            print(step, energy_current)
        coord_trial = move(coord)
        energy_trial = total_energy(coord_trial)
        delta_e = energy_trial - energy_current
        if accept(delta_e):
            coord = coord_trial
            energy_current = energy_trial
    traj.close()
The problem is that rij2 = np.dot(rij, rij) in total_energy, with the constant values you use, is always a very small number. Looking at the expression inside the log used to calculate the FENE term, np.log(1-((np.sqrt(rij2) - r0) / R)**2), I first noticed that you're taking the square root of rij2, which is not consistent with the formula you provided.
Secondly, notice that ((np.sqrt(rij2) - r0) / R)**2 is the same as ((r0 - np.sqrt(rij2)) / R)**2, since the sign gets lost when squaring. Because np.sqrt(rij2) is very small (already in the first iteration -- I checked by printing the values), this will be more or less equal to ((r0 - 0.05) / R)**2, which is a number bigger than 1. Once you subtract this value from 1, the argument of the log is negative, and np.log(...) evaluates to np.nan (standing for "Not A Number"). This propagates through all the function calls (for example, calling energy_trial = total_energy(coord_trial) will effectively set energy_trial to np.nan), until an error is raised by some function.
Maybe you could do something with an np.isnan() check (see the NumPy documentation). Moreover, you should check how you iterate through coord (there are some inconsistencies throughout the code) -- I suggest you ask the Code Review community as well.
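As a concrete illustration of that suggestion, here is a minimal sketch of a domain guard; returning np.inf so the Metropolis step always rejects the move is my assumed convention, not part of the original code:

import numpy as np

def fene_safe(rij2, K=40, R=0.3, r0=0.7):
    # The FENE log is only defined while ((r - r0)/R)**2 < 1, i.e. while the
    # bond stretch stays below the maximum extension R. Outside that domain
    # we return np.inf so accept() always rejects the trial move
    # (np.exp(-beta * np.inf) is exactly 0.0).
    r = np.sqrt(rij2)
    arg = 1.0 - ((r - r0) / R) ** 2
    if arg <= 0.0:
        return np.inf
    return -0.5 * K * R**2 * np.log(arg)

Note that with the posted parameters the very first configuration is already outside the FENE domain: gen_chain(N, R) spaces the beads roughly 0.22 apart (about 0.8 * R) while r0 = 0.7 and the maximum extension R is 0.3, so |r - r0| > R from the start, and the initial spacing or the parameters would need adjusting as well.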

Multivariate Natural Evolution Strategy

I've just started to play around with Reinforcement Learning these days, and I found the Natural Evolution Strategy. I kind of understand how it works, but I'm very new to Python, and I found this code which basically implements the NES algorithm:
https://github.com/huseinzol05/Stock-Prediction-Models/blob/master/agent/updated-NES-google.ipynb
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
import seaborn as sns
import random

sns.set()

# CSV containing the TSLA stock predictions in the form of
# [Date, Open, High, Low, Close, Adj Close, Volume] from
# Yahoo! Finance
df = pd.read_csv('TSLA.csv')
df.head()

def get_state(data, t, n):
    d = t - n + 1
    block = data[d : t + 1] if d >= 0 else -d * [data[0]] + data[0 : t + 1]
    res = []
    for i in range(n - 1):
        res.append(block[i + 1] - block[i])
    return np.array([res])

close = df.Close.values.tolist()
window_size = 30
skip = 1
l = len(close) - 1

class Deep_Evolution_Strategy:
    inputs = None

    def __init__(
        self, weights, reward_function, population_size, sigma, learning_rate
    ):
        self.weights = weights
        self.reward_function = reward_function
        self.population_size = population_size
        self.sigma = sigma
        self.learning_rate = learning_rate

    def _get_weight_from_population(self, weights, population):
        weights_population = []
        for index, i in enumerate(population):
            jittered = self.sigma * i
            weights_population.append(weights[index] + jittered)
        return weights_population

    def get_weights(self):
        return self.weights

    def train(self, epoch=100, print_every=1):
        lasttime = time.time()
        for i in range(epoch):
            population = []
            rewards = np.zeros(self.population_size)
            for k in range(self.population_size):
                x = []
                for w in self.weights:
                    x.append(np.random.randn(*w.shape))
                population.append(x)
            for k in range(self.population_size):
                weights_population = self._get_weight_from_population(self.weights, population[k])
                rewards[k] = self.reward_function(weights_population)
            rewards = (rewards - np.mean(rewards)) / np.std(rewards)
            for index, w in enumerate(self.weights):
                A = np.array([p[index] for p in population])
                self.weights[index] = (
                    w
                    + self.learning_rate
                    / (self.population_size * self.sigma)
                    * np.dot(A.T, rewards).T
                )

class Model:
    def __init__(self, input_size, layer_size, output_size):
        self.weights = [
            np.random.randn(input_size, layer_size),
            np.random.randn(layer_size, output_size),
            np.random.randn(layer_size, 1),
            np.random.randn(1, layer_size),
        ]

    def predict(self, inputs):
        feed = np.dot(inputs, self.weights[0]) + self.weights[-1]
        decision = np.dot(feed, self.weights[1])
        buy = np.dot(feed, self.weights[2])
        return decision, buy

    def get_weights(self):
        return self.weights

    def set_weights(self, weights):
        self.weights = weights

class Agent:
    POPULATION_SIZE = 15
    SIGMA = 0.1
    LEARNING_RATE = 0.03

    def __init__(self, model, money, max_buy, max_sell):
        self.model = model
        self.initial_money = money
        self.max_buy = max_buy
        self.max_sell = max_sell
        self.es = Deep_Evolution_Strategy(
            self.model.get_weights(),
            self.get_reward,
            self.POPULATION_SIZE,
            self.SIGMA,
            self.LEARNING_RATE,
        )

    def act(self, sequence):
        decision, buy = self.model.predict(np.array(sequence))
        return np.argmax(decision[0]), int(buy[0])

    def get_reward(self, weights):
        initial_money = self.initial_money
        starting_money = initial_money
        self.model.weights = weights
        state = get_state(close, 0, window_size + 1)
        inventory = []
        quantity = 0
        for t in range(0, l, skip):
            action, buy = self.act(state)
            next_state = get_state(close, t + 1, window_size + 1)
            if action == 1 and initial_money >= close[t]:
                if buy < 0:
                    buy = 1
                if buy > self.max_buy:
                    buy_units = self.max_buy
                else:
                    buy_units = buy
                total_buy = buy_units * close[t]
                initial_money -= total_buy
                inventory.append(total_buy)
                quantity += buy_units
            elif action == 2 and len(inventory) > 0:
                if quantity > self.max_sell:
                    sell_units = self.max_sell
                else:
                    sell_units = quantity
                quantity -= sell_units
                total_sell = sell_units * close[t]
                initial_money += total_sell
            state = next_state
        return ((initial_money - starting_money) / starting_money) * 100

    def fit(self, iterations, checkpoint):
        self.es.train(iterations, print_every=checkpoint)

    def buy(self):
        initial_money = self.initial_money
        state = get_state(close, 0, window_size + 1)
        starting_money = initial_money
        states_sell = []
        states_buy = []
        inventory = []
        quantity = 0
        for t in range(0, l, skip):
            action, buy = self.act(state)
            next_state = get_state(close, t + 1, window_size + 1)
            if action == 1 and initial_money >= close[t]:
                if buy < 0:
                    buy = 1
                if buy > self.max_buy:
                    buy_units = self.max_buy
                else:
                    buy_units = buy
                total_buy = buy_units * close[t]
                initial_money -= total_buy
                inventory.append(total_buy)
                quantity += buy_units
                states_buy.append(t)
            elif action == 2 and len(inventory) > 0:
                bought_price = inventory.pop(0)
                if quantity > self.max_sell:
                    sell_units = self.max_sell
                else:
                    sell_units = quantity
                if sell_units < 1:
                    continue
                quantity -= sell_units
                total_sell = sell_units * close[t]
                initial_money += total_sell
                states_sell.append(t)
                try:
                    invest = ((total_sell - bought_price) / bought_price) * 100
                except:
                    invest = 0
            state = next_state
        invest = ((initial_money - starting_money) / starting_money) * 100

model = Model(window_size, 500, 3)
agent = Agent(model, 10000, 5, 5)
agent.fit(500, 10)
agent.buy()
As you can see, it is being used for stock prediction, and it only uses the Close column, but I would like to try it with more parameters, let's say High and Low.
I'm struggling to change it to use this two-dimensional list. I've tried a simple change:
close = df.loc[:,['Close','Open']].values.tolist()
which adds one more value to every row of the list. But when I run the code, I start to see errors when I execute the agent.fit() call:
agent.fit(iterations = 500, checkpoint = 10)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-225-d97697984016> in <module>()
----> 1 agent.fit(iterations = 500, checkpoint = 10)
<ipython-input-223-35d9fbba5756> in fit(self, iterations, checkpoint)
66
67 def fit(self, iterations, checkpoint):
---> 68 self.es.train(iterations, print_every = checkpoint)
69
70 def buy(self):
<ipython-input-220-84ca345091f4> in train(self, epoch, print_every)
33 self.weights, population[k]
34 )
---> 35 rewards[k] = self.reward_function(weights_population)
36 rewards = (rewards - np.mean(rewards)) / np.std(rewards)
37
<ipython-input-223-35d9fbba5756> in get_reward(self, weights)
36
37 self.model.weights = weights
---> 38 state = get_state(self.close, 0, self.window_size + 1)
39 inventory = []
40 quantity = 0
<ipython-input-219-0df8d8be24a9> in get_state(data, t, n)
4 res = []
5 for i in range(n - 1):
----> 6 res.append(block[i + 1] - block[i])
7 return np.array([res])
TypeError: unsupported operand type(s) for -: 'list' and 'list'
I assume the first step is to update my Model class to use a different input_size parameter, right?
Any help would be appreciated! Thanks!
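For reference, the TypeError itself comes from get_state: with a plain list of [Close, Open] rows, block[i + 1] - block[i] subtracts two Python lists, which is not defined. A minimal sketch of one way to proceed (my assumption, separate from the input_size change the question asks about) is to work on NumPy rows so the subtraction is element-wise:

import numpy as np

def get_state_2d(data, t, n):
    # data: array of shape (timesteps, features), e.g. columns [Close, Open].
    data = np.asarray(data)
    d = t - n + 1
    if d >= 0:
        block = data[d : t + 1]
    else:
        # Pad the front by repeating the first row, mirroring the 1-D version.
        block = np.concatenate([np.repeat(data[:1], -d, axis=0), data[: t + 1]])
    # Element-wise differences between consecutive rows now work because
    # block is an ndarray, not a list of lists.
    res = block[1:] - block[:-1]
    return res.reshape(1, -1)  # flatten so the model sees one input vector

The model's input_size would then be window_size * n_features, e.g. Model(window_size * 2, 500, 3) for two features (a hypothetical change consistent with the sketch above).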
