List out of bounds - python-3.x

import numpy as np

def f(x, t):
    return x

t = np.linspace(3, 5, 7)
x = np.zeros(7)
h = 1
i = 0
while i <= len(t):
    x[0] = 0
    x[i+1] = x[i] + h * f(x[i], t[i])
    i += 1
but I keep getting "index 7 is out of bounds for axis 0 with size 7". How do I fix this?

In Python, indices start at 0, not 1, so the valid indices of an array run over the interval [0, len(t) - 1].
Your while loop uses <=, so it also visits index 7, because len(t) is 7 in your case.
On top of that, inside the loop you write to the next index i+1 based on the current value. So even at index 6 you try to write to index 7, which does not exist. The last index you can safely process is len(t) - 2.
Just replace the condition with while i < len(t) - 1. Then i runs up to 5 and the loop fills index 6, which is the last index of your array.
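Putting that together, a corrected version of your loop could look like this (a minimal sketch that keeps your update rule and array sizes; only the loop bound and the placement of the initial condition change):

import numpy as np

def f(x, t):
    return x

t = np.linspace(3, 5, 7)
x = np.zeros(7)
h = 1

x[0] = 0                      # initial condition, set once before the loop
i = 0
while i < len(t) - 1:         # i goes up to 5, so i + 1 stays within [0, 6]
    x[i + 1] = x[i] + h * f(x[i], t[i])
    i += 1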

Related

how to get the value of column 2 when column 1 is greater than 3 and check which bin this value belongs to

I have a dataframe with two columns, A and B. First I need to make empty bins with step 1 from 1 to 11: (1,2), (2,3), ..., (10,11). Then I need to check in the original dataframe where column B's value is greater than 3, and get the value of column 'A' from 2 rows before that point.
Here is example dataframe :
df=pd.DataFrame({'A':[1,8.5,5.2,7,8,9,0,4,5,6],'B':[1,2,2,2,3.1,3.2,3,2,1,2]})
Required output 1:
df_out1=pd.DataFrame({'Value_A':[8.5,5.2]})
Required output 2:
df_output2:
Bins count
(1,2) 0
(2,3) 0
(3,4) 0
(4,5) 0
(5,6) 1
(6,7) 0
(7,8) 0
(8,9) 1
(9,10) 0
(10,11) 0
You can index into a shifted series to pick up the value of 'A' from rows before the point where 'B' satisfies the condition, like
out1 = df['A'].shift(3)[df['B'] > 3]
(with shift(3) being what reproduces your required output of 8.5 and 5.2).
What you want to do with the bins is known as a histogram. You can easily compute it with numpy like
count, bin_edges = np.histogram(out1, bins=[i for i in range(1, 12)])
out2 = pd.DataFrame({'bin_lo': bin_edges[:-1], 'bin_hi': bin_edges[1:], 'count': count})
Here 'bin_lo' and 'bin_hi' are the lower and upper bounds of the bins.
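Put together, a self-contained version of the two steps might look like this (a sketch using the example dataframe from the question; 'bin_lo', 'bin_hi' and 'count' are just the column labels chosen above):

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 8.5, 5.2, 7, 8, 9, 0, 4, 5, 6],
                   'B': [1, 2, 2, 2, 3.1, 3.2, 3, 2, 1, 2]})

# values of A from a few rows before B exceeds 3 (shift(3) matches the required output)
out1 = df['A'].shift(3)[df['B'] > 3]        # 8.5 and 5.2

# histogram over the bins (1,2), (2,3), ..., (10,11)
count, bin_edges = np.histogram(out1, bins=[i for i in range(1, 12)])
out2 = pd.DataFrame({'bin_lo': bin_edges[:-1],
                     'bin_hi': bin_edges[1:],
                     'count': count})
print(out2)   # (5,6) and (8,9) each get a count of 1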

I want to improve the speed of my algorithm with multiple rows of input. Python. Find the average of consecutive elements in a list

I need to find the average of consecutive elements from a list.
First I am given the length of the list,
then the list of numbers,
then the number of tests I need to perform (several rows of input),
then I am given several inputs to perform the tests (and need to print as many rows of results);
every test row consists of the start and end element in the list.
My algorithm:
nu = int(input())                 # at first I am given the length of the list
numbers = input().split()         # then the list of numbers
num = input()                     # number of rows with inputs
k = [float(i) for i in numbers]   # given that the numbers in the list are floats
i = 0
while i < int(num):
    a, b = input().split()        # start and end element in the list
    i += 1
    print(round(sum(k[int(a):(int(b) + 1)]) / (-int(a) + int(b) + 1), 6))  # round to 6 decimals
But it's not fast enough. I was told it's better to get rid of the "while" loop but I don't know how. I'd appreciate any help.
Example:
Input:
8 - len(list)
79.02 36.68 79.83 76.00 95.48 48.84 49.95 91.91 - list
10 - number of test
0 0 - a1,b1
0 1
0 2
0 3
0 4
0 5
0 6
0 7
1 7
2 7
Output:
79.020000
57.850000
65.176667
67.882500
73.402000
69.308333
66.542857
69.713750
68.384286
73.668333
i = 0
while i < int(num):
    a, b = input().split()  # start and end element in list
    i += 1
Replace your while loop with a for loop. You could also get rid of the repeated int calls in the print statement:
for _ in range(int(num)):
    a, b = [int(j) for j in input().split()]
    print(round(sum(k[a:b + 1]) / (b - a + 1), 6))   # a and b are already ints here
You didn't spell out the constraints, but I am guessing that the ranges to be averaged could be quite large, so computing sum(k[a:b+1]) for every query may take a while.
However, if you precompute the partial (prefix) sums of the input list, each query can be answered in constant time: the sum of the numbers in a range is the difference of two partial sums. See the sketch below.
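A minimal sketch of that idea, using itertools.accumulate for the prefix sums (variable names follow the original code; the same input format is assumed):

from itertools import accumulate

nu = int(input())                          # length of the list
k = [float(x) for x in input().split()]    # the numbers
num = int(input())                         # number of queries

# prefix[i] holds the sum of the first i numbers, so sum(k[a:b+1]) == prefix[b+1] - prefix[a]
prefix = [0.0] + list(accumulate(k))

for _ in range(num):
    a, b = [int(j) for j in input().split()]
    total = prefix[b + 1] - prefix[a]
    print(round(total / (b - a + 1), 6))

Each query is now O(1) after an O(n) precomputation, instead of O(n) per query.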

How to calculate the time complexity for nested for loops in the following example?

So in the following code, I am passing a (huge) number string to a function where I have to find the maximum product of m consecutive digits.
First, the outer loop walks through the n-digit string, and then the inner loop walks through m digits.
The inner loop is affected by the if statement, which makes the index jump ahead by up to m positions when a digit of 0 is encountered.
EDIT : 1
Actual Problem Question:
The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
731671765313306249192251....(1000digits)
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
Example:
m = 12 number = "1234567891120123456704832...(1000 digits)"
So in the 1st iteration the function will calculate the product of the first 12 digits (i.e. from index 11 down to index 0 in "1234567891120123456704832...").
Now, in the 2nd iteration, when it checks the value at index 12, which is 0, the index will jump to index 13. This way the loop skips 11 iterations.
For the 3rd iteration, the inner loop will execute for 4 iterations until it finds a 0 ("0123456704832...").
def LargestProductInSeries_1(number, m):
    max = -1
    product = 1
    index = 0
    x = 0
    while index < len(number) - (m - 1):
        for j in range(index + (m - 1), index - 1, -1):
            num = int(number[j])
            if not num:
                index = j
                break
            product = product * int(number[j])
        max = product if max < product else max
        product = 1
        index += 1
    return max
So according to my analysis, the worst-case time complexity would be O(n*m).
I think the best case would be O(n/m), if the inner loop runs to completion only once, or if every m-th digit is 0, so the outer loop still executes but the index jumps ahead by m each time.
Is my analysis correct?
What would the average time be for this case?
Would it be O(n*(log m))? Can anyone explain how? Or how do you find the complexity in such cases?

Selecting rows where a numeric column value changes sign through openpyxl

I'm learning Python and openpyxl for data analysis on a large xlsx workbook. I have a for loop that can iterate down an entire column. Here's some example data:
ROW: VALUE:
1 1
2 2
3 3
4 4
5 -4
6 -1
7 -6
8 2
9 3
10 -3
I want to print out the rows in which the value changes from positive to negative, and vice versa. So in the above example, row numbers 5, 8, and 10 would print to the console. How can I use an if statement within a for loop to iterate through a column with openpyxl?
So far I can print all of the cells in a column:
import openpyxl
wb = openpyxl.load_workbook('ngt_log.xlsx')
sheet = wb.get_sheet_by_name('sheet1')
for i in range(1, 10508, 1):  # 10508 is the length of the column
    print(i, sheet.cell(row=i, column=6).value)
My idea was to just add an if statement inside of the for loop:
for i in range(1, 10508, 1):  # 10508 is the length of the column
    if ((i > 0 and (i + 1) < 0) or (i < 0 and (i + 1) > 0)):
        print((i + 1), sheet.cell(row=i, column=6).value)
But that doesn't work. Am I formulating the if statement correctly?
It looks to me as though your condition is contradicting itself:
for i in range(1, 10508, 1):  # 10508 is the length of the column
    if ((i > 0 and (i + 1) < 0) or (i < 0 and (i + 1) > 0)):
        print((i + 1), sheet.cell(row=i, column=6).value)
Here i is just the loop counter, not a cell value: if i is greater than 0 then i + 1 can never be less than 0, and vice versa, so neither branch of the condition can ever be true.
You need to get the sheet.cell values first, and then do the comparisons:
end_range = 10508
for i in range(1, end_range):
    current, next = sheet.cell(row=i, column=6).value, sheet.cell(row=i + 1, column=6).value
    if (current > 0 and next < 0) or (current < 0 and next > 0):
        print(i + 1, next)
numpy has a sign() function (the standard math module only has copysign()), but that's kinda overkill here. You may also want to figure out what you want to do if the values are 0.
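For example, a small helper that treats zero explicitly could look like this (just a sketch, assuming sheet is loaded as in the question; here a 0 simply never counts as a sign change):

def sign(x):
    # -1 for negative, 1 for positive, 0 for zero
    return (x > 0) - (x < 0)

for i in range(1, 10508):
    current = sheet.cell(row=i, column=6).value
    nxt = sheet.cell(row=i + 1, column=6).value
    # only report a change between a strictly positive and a strictly negative value
    if sign(current) * sign(nxt) == -1:
        print(i + 1, nxt)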
You can use a flag to check for positive and negative.
ws = wb['sheet1']  # why people persist in using long deprecated syntax is beyond me
flag = None
for row in ws.iter_rows(max_row=10508, min_col=6, max_col=6):
    cell = row[0]
    sign = cell.value > 0 and "positive" or "negative"
    if flag is not None and sign != flag:
        print(cell.row)
    flag = sign
You can write the rules to select the rows where the sign has changed and put them in a generator expression without using extra memory, like this:
pos = lambda x: x >= 0
keep = lambda s, c, i, v: pos(s[c][i].value) != pos(v.value)
gen = (x + 1 for x, y in enumerate(sheet['f']) if x > 0 and keep(sheet, 'f', x - 1, y))
Then, whenever you need to know the rows where the sign has changed, you just iterate over gen, e.g.:
for row in gen:
    print(row)  # or whatever you want to do with the row number

how to get a kind of "maximum" in a matrix, efficiently

I have the following problem: I have a matrix loaded with the pandas module, where each cell holds a number between -1 and 1. What I want to find is the maximum "possible" value in each row that is not also the maximum value of another row.
If, for example, two rows have their maximum value in the same column, I compare both values and keep the bigger one; then for the row whose maximum is smaller, I take its second-largest value (and run the same analysis again and again).
To explain myself better, consider my code:
import numpy as np
import pandas as pd

matrix = pd.read_csv("matrix.csv")
# this matrix has an id (or name) for each column
# ... and the first column has the id of each row
results = pd.DataFrame(np.empty((len(matrix), 3), dtype=pd.Timestamp),
                       columns=['id1', 'id2', 'max_pos'])
l = len(matrix.columns)  # number of columns
next = 1
while next == 1:
    next = 0
    for i in range(0, len(matrix)):
        max_column = str(0)
        for j in range(1, l):  # 1 because the first column is an id
            if matrix[max_column][i] < matrix[str(j)][i]:
                max_column = str(j)
        results['id1'][i] = str(i)  # I could also put matrix['0'][i] here
        results['id2'][i] = max_column
        results['max_pos'][i] = matrix[max_column][i]
    for i in range(0, len(results)):  # now I check if two or more rows have the same max column
        for ii in range(0, len(results)):
            # if two rows have their max in the same column, I keep the one with the biggest
            # ... max value and change the other to -1 to iterate again
            if (results['id2'][i] == results['id2'][ii]) and (results['max_pos'][i] < results['max_pos'][ii]):
                matrix[results['id2'][i]][i] = -1
                next = 1
Here is an example:
#consider
pd.DataFrame({'a':[1, 2, 5, 0], 'b':[4, 5, 1, 0], 'c':[3, 3, 4, 2], 'd':[1, 0, 0, 1]})
a b c d
0 1 4 3 1
1 2 5 3 0
2 5 1 4 0
3 0 0 2 1
#at the first iteration I will have the following result
0 b 4 # this means that the row 0 has its maximum at column 'b' and its value is 4
1 b 5
2 a 5
3 c 2
#the problem is that column b is the maximum of row 0 and 1, but I know that the maximum of row 1 is bigger than row 0, so I take the second maximum of row 0, then:
0 c 3
1 b 5
2 a 5
3 c 2
#now I have solved the problem for rows 0 and 1, but column c is now the maximum of rows 0 and 3, so I compare them and take the second maximum in row 3
0 c 3
1 b 5
2 a 5
3 d 1
#now I'm done. In the case that two rows have the same column as maximum and also the same value, nothing happens and I keep those values.
#what if the matrix would be
pd.DataFrame({'a':[1, 2, 5, 0], 'b':[5, 5, 1, 0], 'c':[3, 3, 4, 2], 'd':[1, 0, 0, 1]})
a b c d
0 1 5 3 1
1 2 5 3 0
2 5 1 4 0
3 0 0 2 1
#then, at the first iteration the result will be:
0 b 5
1 b 5
2 a 5
3 c 2
#then, given that the max value of row 0 and 1 is at the same column, I should compare the maximum values
# ... but in this case the values are the same (both are 5), this would be the end of iterating
# ... because I can't choose between row 0 and 1 and the other rows have their maximum at different columns...
This code works perfectly for me with a matrix of, say, 100x100. But if the matrix size grows to 50,000x50,000, the code takes far too long to finish. I know my code is probably the most inefficient way to do this, but I don't know how to deal with it.
I have been reading about threads in Python, but spawning 50,000 threads doesn't help because my computer doesn't use more CPU. I also tried functions like .max(), but I'm not able to get the column of the maximum and compare it with the other maxima ...
If anyone could help me or give me a piece of advice to make this more efficient, I would be very grateful.
Going to need more information on this. What are you trying to accomplish here?
This will help you get some of the way, but in order to fully achieve what you're doing I need more context.
We'll import numpy, random, and Counter from collections:
import numpy as np
import random
from collections import Counter
We'll create a random 50k x 50k matrix of numbers between -10M and +10M
mat = np.random.randint(-10000000,10000000,(50000,50000))
Now to get the maximums for each row we can just do the following list comprehension:
maximums = [max(mat[x,:]) for x in range(len(mat))]
Now we want to find out which ones are not maximums in any other rows. We can use Counter on our maximums list to find out how many of each there are. Counter returns a counter object that is like a dictionary with the maximum as the key, and the # of times it appears as the value.
We then do a dictionary comprehension keeping only the entries whose value is 1. That gives us the maximums that show up only once. We use the .keys() function to grab the numbers themselves and then turn them into a list.
c = Counter(maximums)
{9999117: 15,
9998584: 2,
9998352: 2,
9999226: 22,
9999697: 59,
9999534: 32,
9998775: 8,
9999288: 18,
9998956: 9,
9998119: 1,
...}
k = list( {x: c[x] for x in c if c[x] == 1}.keys() )
[9998253,
9998139,
9998091,
9997788,
9998166,
9998552,
9997711,
9998230,
9998000,
...]
Lastly we can do the following list comprehension, iterating through the original maximums list, to get the indices of the rows where these occur.
indices = [i for i, x in enumerate(maximums) if x in k]
Depending on what else you're looking to do, we can go from here.
It's not the speediest program, but finding the maximums, the counter, and the indices takes 182 seconds on a 50,000 by 50,000 matrix that is already loaded.
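If speed matters, the same three steps can also be expressed with vectorized numpy calls, which should shave off a good part of the Python-level looping (a sketch of the same idea on a smaller random matrix; it does not reproduce the full tie-breaking logic from the question):

import numpy as np
from collections import Counter

mat = np.random.randint(-10000000, 10000000, (1000, 1000))   # smaller matrix for illustration

maximums = mat.max(axis=1)                                    # row maxima in a single vectorized call
counts = Counter(maximums.tolist())
unique_maxima = {m for m, c in counts.items() if c == 1}      # maxima that occur in exactly one row

# rows whose maximum is not the maximum of any other row
indices = [i for i, m in enumerate(maximums) if m in unique_maxima]

Using a set for the membership test also avoids the linear "x in k" lookup of the list version.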
