Loop results to array - nim-lang

I just started with Nim yesterday. My goal is to calculate the mean of the values of the expression b[i]/a[i]. I tried to use the math module's built-in mean function, but apparently it only works with arrays. I don't know how to collect the results of my loop into an array (or maybe there is a different solution?). Any help appreciated!
var a = @[100.0, 102.0, 101.0, 114.0, 128.0, 130.0, 127.0]
var b = a[1..high(a)] & a[high(a)]
for i in low(a)..high(a):
  echo i+1, " period ", "= ", (b[i]/a[i])

The important part is to make a new sequence with var c = newSeq[float]() and add values to it with c.add(value), as in the first block here:
import math
import sequtils

var a = @[100.0, 102.0, 101.0, 114.0, 128.0, 130.0, 127.0]
var b = a[1..a.high] & a[a.high]

block: # Iterative with math.mean
  var c = newSeq[float]()
  for i in a.low..a.high:
    c.add(b[i]/a[i])
  echo mean(c)

block: # Iterative without math.mean (most efficient)
  var myMean = 0.0
  for i in a.low..a.high:
    myMean += b[i]/a[i]
  myMean /= a.len.float
  echo myMean

block: # Functional (not really Nim-like)
  echo zip(a, b).map(proc(x: (float, float)): float = x[1]/x[0]).mean

Related

How to compute the average of a string of floats

temp = "75.1,77.7,83.2,82.5,81.0,79.5,85.7"
I am stuck on this assignment and unable to find a relevant answer to help.
I've used .split(",") and float(), and I am still stuck here.
temp = "75.1,77.7,83.2,82.5,81.0,79.5,85.7"
li = temp.split(",")

def avr(li):
    av = 0
    for i in li:
        av += float(i)
    return av/len(li)

print(avr(li))
You can use sum() to add the elements of a tuple of floats:
temp = "75.1,77.7,83.2,82.5,81.0,79.5,85.7"

def average(s_vals):
    vals = tuple(float(v) for v in s_vals.split(","))
    return sum(vals) / len(vals)

print(average(temp))
Admittedly similar to the answer by @emacsdrivesmenuts (GMTA).
However, this version opts for the map function, which should scale nicely for larger strings. It removes the explicit for loop and the per-value float() conversion, passing these operations to the lower-level (highly optimised) C implementation.
For example:
def mean(s):
    vals = tuple(map(float, s.split(',')))
    return sum(vals) / len(vals)

Example use:

>>> temp = '75.1,77.7,83.2,82.5,81.0,79.5,85.7'
>>> mean(temp)
80.67142857142858
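For completeness, the standard library's statistics module (Python 3.8+) can also do the averaging step via statistics.fmean; the helper name mean_from_csv below is just for illustration:

```python
import statistics

def mean_from_csv(s):
    # Split the comma-separated string, convert each piece to float,
    # and let statistics.fmean handle the summing and dividing.
    return statistics.fmean(float(v) for v in s.split(","))

temp = "75.1,77.7,83.2,82.5,81.0,79.5,85.7"
print(mean_from_csv(temp))  # ≈ 80.6714
```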

Can't evaluate at compile time - NIM

Hi, I'm starting to play around with Nim.
I get a "can't evaluate at compile time" error on this code:
import strutils

type
  Matrix[x, y: static[int], T] = object
    data: array[x * y, T]

var n, m: int = 0

proc readFile() =
  let f = open("matrix.txt")
  defer: f.close()
  var graph_size = parseInt(f.readline)
  var whole_graph: Matrix[graph_size, graph_size, int]
  for line in f.lines:
    for field in line.splitWhitespace:
      var cell = parseInt(field)
      whole_graph[n][m] = cell
      m = m + 1
    n = n + 1

readFile()
Any help appreciated.
Unless you absolutely, positively need an array in this scenario while not knowing its size at compile time, you may want to switch to the seq type, whose size does not need to be known at compile time.
Together with std/enumerate you can even save yourself the hassle of tracking the indices n and m yourself:
import std/[strutils, enumerate]

type Matrix[T] = seq[seq[T]]

proc newZeroIntMatrix(x: int, y: int): Matrix[int] =
  result = newSeqOfCap[seq[int]](x)
  for i in 0..x-1:
    result.add(newSeqOfCap[int](y))
    for j in 0..y-1:
      result[i].add(0)

proc readFile(): Matrix[int] =
  let f = open("matrix.txt")
  defer: f.close()
  let graph_size = parseInt(f.readline)
  var whole_graph = newZeroIntMatrix(graph_size, graph_size)
  for rowIndex, line in enumerate(f.lines):
    for columnIndex, field in enumerate(line.split):
      let cell = parseInt(field)
      whole_graph[rowIndex][columnIndex] = cell
  result = whole_graph

let myMatrix = readFile()
echo myMatrix.repr
Further things I'd like to point out though are:
array[x * y, T] will not give you a 2D array, but a single array of length x*y. If you want a 2D array, you would most likely want to store this as array[x, array[y, T]]. That is assuming that you know x and y at compile-time, so your variable declaration would look roughly like this: var myMatrix: array[4, array[5, int]]
Your Matrix type has the array in its data field, so trying to access the array through that Matrix type needs to be done accordingly (myMatrix.data[n][m]). That is, unless you define proper [] and []= procs for the Matrix type that do exactly that under the hood.
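For illustration, such operators could look roughly like this (a minimal sketch against the original Matrix[x, y: static[int], T] type; the row-major index arithmetic i * y + j is an assumption about the intended layout):

```nim
proc `[]`[x, y: static[int], T](m: Matrix[x, y, T]; i, j: int): T =
  # Read element (i, j) out of the flat data array, row-major.
  m.data[i * y + j]

proc `[]=`[x, y: static[int], T](m: var Matrix[x, y, T]; i, j: int; v: T) =
  # Write element (i, j) into the flat data array, row-major.
  m.data[i * y + j] = v
```

With these in scope, the assignment in the question would be written whole_graph[n, m] = cell rather than whole_graph[n][m] = cell.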

converting dsolve output to solve it for a value in sympy

I have
import sympy as sm
x = sm.symbols('x', cls=sm.Function)
t = sm.symbols('t')
expr = x(t).diff(t) + 0.05*x(t)
sol = sm.dsolve(expr,x(t), ics = {x(0):25})
Now I have the solution as a relational equality, and I want to solve for t where x = 1. I can't do
s = sm.Eq(x, -1) and then sm.solve(s, t), as s returns False.
Figured it out - it's simple: sol.rhs can be used to create an equation and then solve for t.
equation = sm.Eq(sol.rhs,1)
sm.solve(equation,t)
gives the result t ≈ 64.38
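As a sanity check, dsolve's closed-form solution here is x(t) = 25·e^(-0.05·t), so the t where x = 1 can also be computed directly with the standard library:

```python
import math

# Solve 25 * exp(-0.05 * t) = 1 for t:  t = ln(25) / 0.05
t = math.log(25) / 0.05
print(round(t, 2))  # 64.38
```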

In python, what does an array after a function call mean

def moving_average_forecast(series, window_size):
    """Forecasts the mean of the last few values.
    If window_size=1, then this is equivalent to naive forecast"""
    forecast = []
    for time in range(len(series) - window_size):
        forecast.append(series[time:time + window_size].mean())
    return np.array(forecast)

moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
What does this [split_time - 30:] mean after the function call moving_average_forecast(series, 30)?
PS: series is a numpy array.
Thanks
It is simply shorthand for doing:
arr = moving_average_forecast(series, 30)
moving_avg = arr[split_time - 30:]
Thank you @Tomerikoo
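In other words, the slice applies to whatever the call returns. A minimal pure-Python sketch of the same pattern (the function and numbers are made up for illustration):

```python
def make_values():
    # Stand-in for a function that returns a sequence.
    return [10, 20, 30, 40, 50]

split_time = 3

# Slicing the return value directly ...
tail = make_values()[split_time - 1:]

# ... is the same as the two-step version:
arr = make_values()
tail_two_step = arr[split_time - 1:]

print(tail)  # [30, 40, 50]
print(tail == tail_two_step)  # True
```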

How do I speed up this nested for loop in Python?

The function shown below runs quite slowly, even though I used swifter to call it. Does anyone know how to speed it up? My Python knowledge is limited at this point and I would appreciate any help I can get. I tried using the map() function but somehow it didn't work for me. I guess the nested for loop makes it rather slow, right?
BR,
Hannes
def polyData(uniqueIds):
    for index in range(len(uniqueIds) - 1):
        element = uniqueIds[index]
        polyData1 = df[df['id'] == element]
        poly1 = build_poly(polyData1)
        poly1 = poly1.buffer(0)
        for secondIndex in range(index + 1, len(uniqueIds)):
            otherElement = uniqueIds[secondIndex]
            polyData2 = df[df['id'] == otherElement]
            poly2 = build_poly(polyData2)
            poly2 = poly2.buffer(0)
            # Calculate overlap percentage wise
            overlap_pct = poly1.intersection(poly2).area / poly1.area
            # Form new DF
            df_ol = pd.DataFrame({'id_1': [element], 'id_2': [otherElement], 'overlap_pct': [overlap_pct]})
            # Write to SQL database
            df_ol.to_sql(name='df_overlap', con=e, if_exists='append', index=False)
This function is inherently slow for large amounts of data due to its complexity (it tries every 2-combination of the set). However, you're also calculating the poly for the same ids multiple times, even though you can calculate each one once beforehand (which might be expensive) and store it for later use. So try to extract the building of the polys:
def getPolyForUniqueId(uid):
    polyData = df[df['id'] == uid]
    poly = build_poly(polyData)
    poly = poly.buffer(0)
    return poly

def polyData(uniqueIds):
    polyDataList = [getPolyForUniqueId(uid) for uid in uniqueIds]
    for index in range(len(uniqueIds) - 1):
        id_1 = uniqueIds[index]
        poly_1 = polyDataList[index]
        for secondIndex in range(index + 1, len(uniqueIds)):
            id_2 = uniqueIds[secondIndex]
            poly_2 = polyDataList[secondIndex]
            ...
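The precompute-then-pair pattern above can also be written with itertools.combinations, which yields every unordered pair directly. This sketch is self-contained: a made-up expensive_build stands in for the build_poly(...).buffer(0) step, and a plain dict replaces the DataFrame:

```python
from itertools import combinations

def expensive_build(uid):
    # Stand-in for the costly build_poly(...).buffer(0) step.
    return uid * uid

def pair_results(unique_ids):
    # Precompute each expensive value exactly once ...
    built = {uid: expensive_build(uid) for uid in unique_ids}
    # ... then visit each unordered pair exactly once.
    return [(id_1, id_2, built[id_1] + built[id_2])
            for id_1, id_2 in combinations(unique_ids, 2)]

print(pair_results([1, 2, 3]))  # [(1, 2, 5), (1, 3, 10), (2, 3, 13)]
```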
