Efficient Find Next Greater in Another Array - python-3.x

Is it possible to remove the for loops in this function and get a speedup in the process? I have not been able to get the same results with vectorized methods for this function. Or is there another option?
import numpy as np
indices = np.array(
[814, 935, 1057, 3069, 3305, 3800, 4093, 4162, 4449])
within = np.array(
[193, 207, 243, 251, 273, 286, 405, 427, 696,
770, 883, 896, 1004, 2014, 2032, 2033, 2046, 2066,
2079, 2154, 2155, 2156, 2157, 2158, 2159, 2163, 2165,
2166, 2167, 2183, 2184, 2208, 2210, 2212, 2213, 2221,
2222, 2223, 2225, 2226, 2227, 2281, 2282, 2338, 2401,
2611, 2612, 2639, 2640, 2649, 2700, 2775, 2776, 2785,
3030, 3171, 3191, 3406, 3427, 3527, 3984, 3996, 3997,
4024, 4323, 4331, 4332])
def get_first_ind_after(indices, within):
    """returns array of the first index after each listed in indices
    indices and within must be sorted ascending
    """
    first_after_leading = []
    for index in indices:
        for w_ind in within:
            if w_ind > index:
                first_after_leading.append(w_ind)
                break
    # convert to np array
    first_after_leading = np.array(first_after_leading).flatten()
    return np.unique(first_after_leading)
It should return the next greater number from within for each value in the indices array, if there is one.
# Output:
[ 883 1004 2014 3171 3406 3984 4323]

Here's one based on np.searchsorted -
def next_greater(indices, within):
    idx = np.searchsorted(within, indices)
    idxv = idx[idx < len(within)]
    idxv_unq = np.unique(idxv)
    return within[idxv_unq]
Alternatively, idxv_unq could be computed like so and should be more efficient -
idxv_unq = idxv[np.r_[True,idxv[:-1] != idxv[1:]]]
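As a quick sanity check, the function reproduces the expected output on an abbreviated version of the question's data (the shortened within below is my own abridgement). One caveat: np.searchsorted with the default side='left' returns the first position where within >= the query, so it only means "strictly greater" when the two arrays share no common values; use side='right' if equal values can occur.
import numpy as np

def next_greater(indices, within):
    idx = np.searchsorted(within, indices)   # first position with within >= index
    idxv = idx[idx < len(within)]            # drop queries with nothing greater
    idxv_unq = np.unique(idxv)               # de-duplicate, stays sorted
    return within[idxv_unq]

indices = np.array([814, 935, 1057, 3069, 3305, 3800, 4093, 4162, 4449])
within = np.array([193, 207, 883, 1004, 2014, 3171, 3406, 3984, 4323, 4331])
print(next_greater(indices, within))  # [ 883 1004 2014 3171 3406 3984 4323]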

Try this:
[within[within>x][0] if len(within[within>x])>0 else 0 for x in indices]
As in,
In [35]: import numpy as np
...: indices = np.array([814, 935, 1057, 3069, 3305, 3800, 4093, 4162, 4449])
...:
...: within = np.array(
...: [193, 207, 243, 251, 273, 286, 405, 427, 696,
...: 770, 883, 896, 1004, 2014, 2032, 2033, 2046, 2066,
...: 2079, 2154, 2155, 2156, 2157, 2158, 2159, 2163, 2165,
...: 2166, 2167, 2183, 2184, 2208, 2210, 2212, 2213, 2221,
...: 2222, 2223, 2225, 2226, 2227, 2281, 2282, 2338, 2401,
...: 2611, 2612, 2639, 2640, 2649, 2700, 2775, 2776, 2785,
...: 3030, 3171, 3191, 3406, 3427, 3527, 3984, 3996, 3997,
...: 4024, 4323, 4331, 4332])
In [36]: [within[within>x][0] if len(within[within>x])>0 else 0 for x in indices]
Out[36]: [883, 1004, 2014, 3171, 3406, 3984, 4323, 4323, 0]
This is the Pythonic approach called a list comprehension; it's a shortened version of a foreach loop. So if I were to expand this out:
result = []
for x in indices:
    # This next line is a boolean index into the array; it returns all of the
    # items in the array that have a value greater than x
    y = within[within > x]
    # At this point, y is an array of all the items which are larger than x.
    # Since you wanted the first of these items, we'll just take the first item
    # off of this new array, but it is possible that y is empty (there are no
    # values that match the condition), so there is a check for that
    if len(y) > 0:
        z = y[0]
    else:
        z = 0  # or None or whatever you like
    # Now add this value to the array that we are building
    result.append(z)
# Now result has the array
I wrote it this way because it uses the vector operations (i.e. the boolean mask) and also leverages list comprehension, which is a much cleaner, simpler way to write a foreach that returns an array.

Related

How to add and concatenate two variables in the same list

array = [[1676, 196, 159, 29, 'invoice'], [1857, 198, 108, 28, 'date:']]
width = 159+108 = 267
height = 29+28 = 57
label = invoice date:
Required solution: [1676, 196, 267, 57, 'invoice date:']
Is there any solution to concatenate the strings and add the numbers in the same list?
Assuming your other lines are for your own notes/testing, and your "required solution" is a typo, you can use zip like so:
array = [[1676, 196, 159, 29, 'invoice'], [1857, 198, 108, 28, 'date:']]
res = []
for x, y in zip(array[0], array[1]):  # compare values at each index
    if isinstance(x, str):            # check to see if x is a string
        res.append(x + ' ' + y)       # add a space if x is a string
    else:
        res.append(x + y)             # add the two integers if x is not a string
print(res)
[3533, 394, 267, 57, 'invoice date:']
Note that the concatenation above will only work if you are sure that the same indices will have strings vs. integers.
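If the two lists are not guaranteed to line up type-for-type, checking both elements (not just x) avoids silently mixing a string with a number. A minimal sketch; the merge_rows helper name is just for illustration:
def merge_rows(row_a, row_b):
    # pairwise merge: concatenate strings with a space, add numbers
    merged = []
    for x, y in zip(row_a, row_b):
        if isinstance(x, str) and isinstance(y, str):
            merged.append(x + ' ' + y)
        elif isinstance(x, (int, float)) and isinstance(y, (int, float)):
            merged.append(x + y)
        else:
            raise TypeError('mismatched types: %r vs %r' % (x, y))
    return merged

array = [[1676, 196, 159, 29, 'invoice'], [1857, 198, 108, 28, 'date:']]
print(merge_rows(array[0], array[1]))  # [3533, 394, 267, 57, 'invoice date:']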

KeyError: 0 - Function works initially, but returns error when called on other data

I have a function:
def create_variables(name, probabilities, labels):
print('function called')
model = Metrics(probabilities, labels)
prec_curve = model.precision_curve()
kappa_curve = model.kappa_curve()
tpr_curve = model.tpr_curve()
fpr_curve = model.fpr_curve()
pr_auc = auc(tpr_curve, prec_curve)
roc_auc = auc(fpr_curve, tpr_curve)
auk = auc(fpr_curve, kappa_curve)
return [name, prec_curve, kappa_curve, tpr_curve, fpr_curve, pr_auc, roc_auc, auk]
I have the following variables:
svm = pd.read_csv('SVM.csv')
svm_prob_1 = svm.probability[svm.fold_number == 1]
svm_prob_2 = svm.probability[svm.fold_number == 2]
svm_label_1 = svm.true_label[svm.fold_number == 1]
svm_label_2 = svm.true_label[svm.fold_number == 2]
I want to execute the following lines:
svm1 = create_variables('svm_fold1', svm_prob_1, svm_label_1)
svm2 = create_variables('svm_fold2', svm_prob_2, svm_label_2)
Python works as expected for svm1. However, when it starts processing svm2, I receive the following error:
svm2 = create_variables('svm_fold2', svm_prob_2, svm_label_2)
function called
Traceback (most recent call last):
File "<ipython-input-742-702cfac4d100>", line 1, in <module>
svm2 = create_variables('svm_fold2', svm_prob_2, svm_label_2)
File "<ipython-input-741-b8b5a84f0298>", line 6, in create_variables
prec_curve = model.precision_curve()
File "<ipython-input-734-dd9c309be961>", line 59, in precision_curve
self.tp, self.tn, self.fp, self.fn = self.confusion_matrix(self.preds)
File "<ipython-input-734-dd9c309be961>", line 72, in confusion_matrix
if pred == self.labels[i]:
File "C:\Users\20200016\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\series.py", line 1068, in __getitem__
result = self.index.get_value(self, key)
File "C:\Users\20200016\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 4730, in get_value
return self._engine.get_value(s, k, tz=getattr(series.dtype, "tz", None))
File "pandas\_libs\index.pyx", line 80, in pandas._libs.index.IndexEngine.get_value
File "pandas\_libs\index.pyx", line 88, in pandas._libs.index.IndexEngine.get_value
File "pandas\_libs\index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 992, in pandas._libs.hashtable.Int64HashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 998, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 0
svm_prob_1 and svm_prob_2 are both of the same shape and contain non-zero values. svm_label_2 contains 0's and 1's and has the same length as svm_prob_2.
Furthermore, the error seems to be in svm_label_2. After substituting that variable, the following line does work:
svm2 = create_variables('svm_fold2', svm_prob_2, svm_label_1)
Based on the code below, there seems to be no difference between svm_label_1 and svm_label_2 though.
type(svm_label_1)
Out[806]: pandas.core.series.Series
type(svm_label_2)
Out[807]: pandas.core.series.Series
min(svm_label_1)
Out[808]: 0
min(svm_label_2)
Out[809]: 0
max(svm_label_1)
Out[810]: 1
max(svm_label_2)
Out[811]: 1
sum(svm_label_1)
Out[812]: 81
sum(svm_label_2)
Out[813]: 89
len(svm_label_1)
Out[814]: 856
len(svm_label_2)
Out[815]: 856
Does anyone know what's going wrong here?
I don't know why it works, but converting svm_label_2 into a list worked:
svm_label_2 = list(svm.true_label[svm.fold_number == 2])
Since svm_label_1 and svm_label_2 are of the same type, I don't understand why the latter raised an error and the former did not. Therefore, I still welcome any explanation of this phenomenon.
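One likely explanation (an assumption, since the Metrics class isn't shown): boolean filtering a Series keeps the original index labels, so svm.true_label[svm.fold_number == 2] presumably starts at some label other than 0, while fold 1 occupies the first rows of the CSV and its labels do start at 0. An access like self.labels[i] with i = 0 then does a label lookup and fails with KeyError: 0. Converting to a list makes the access positional again; reset_index(drop=True) or .iloc would do the same. A small sketch of the effect:
import pandas as pd

s = pd.Series([1, 0, 1, 0], index=[856, 857, 858, 859])  # like a fold-2 slice
# s[0] would raise KeyError: 0, because 0 is not one of the index labels
print(s.iloc[0])                    # 1, positional access works
print(s.reset_index(drop=True)[0])  # 1, labels renumbered from 0
print(list(s)[0])                   # 1, what the list() workaround does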

How do I parse list of dict of dict of ... dict to dataframe?

I've got a list of dictionaries of dictionaries... Basically, it is just a big piece of JSON. Here is what one dict from the list looks like:
{'id': 391257, 'from_id': -1, 'owner_id': -1, 'date': 1554998414, 'marked_as_ads': 0, 'post_type': 'post', 'text': 'Весна — время обновлений. Очищаем балконы от старых лыж и API от устаревших версий: уже скоро запросы к API c версией ниже 5.0 перестанут поддерживаться.\n\nОжидаемая дата изменений: 15 мая 2019 года. \n\nПодробности в Roadmap: https://vk.com/dev/version_update_2.0', 'post_source': {'type': 'vk'}, 'comments': {'count': 91, 'can_post': 1, 'groups_can_post': True}, 'likes': {'count': 182, 'user_likes': 0, 'can_like': 1, 'can_publish': 1}, 'reposts': {'count': 10, 'user_reposted': 0}, 'views': {'count': 63997}, 'is_favorite': False}
And I want to dump each dict to a frame. If I just do
data = pandas.DataFrame(list_of_dicts)
I get a frame with only two columns: the first one contains the keys, and the other one contains the data.
I tried doing it in a loop:
for i in list_of_dicts:
    tmp = pandas.DataFrame().from_dict(i)
    data = pandas.concat([data, tmp])
    print(i)
But I face ValueError:
Traceback (most recent call last):
File "/home/keddad/PycharmProjects/vk_group_parse/Data Grabber.py", line 68, in <module>
main()
File "/home/keddad/PycharmProjects/vk_group_parse/Data Grabber.py", line 61, in main
tmp = pandas.DataFrame().from_dict(i)
File "/home/keddad/anaconda3/envs/vk_group_parse/lib/python3.7/site-packages/pandas/core/frame.py", line 1138, in from_dict
return cls(data, index=index, columns=columns, dtype=dtype)
File "/home/keddad/anaconda3/envs/vk_group_parse/lib/python3.7/site-packages/pandas/core/frame.py", line 392, in __init__
mgr = init_dict(data, index, columns, dtype=dtype)
File "/home/keddad/anaconda3/envs/vk_group_parse/lib/python3.7/site-packages/pandas/core/internals/construction.py", line 212, in init_dict
return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
File "/home/keddad/anaconda3/envs/vk_group_parse/lib/python3.7/site-packages/pandas/core/internals/construction.py", line 51, in arrays_to_mgr
index = extract_index(arrays)
File "/home/keddad/anaconda3/envs/vk_group_parse/lib/python3.7/site-packages/pandas/core/internals/construction.py", line 320, in extract_index
raise ValueError('Mixing dicts with non-Series may lead to '
ValueError: Mixing dicts with non-Series may lead to ambiguous ordering.
How, after this, can I get a dataframe with one row per post (one dictionary in the list is one post) and all the data in it as columns?
I can't figure out your df exactly, but I think you simply need to do a reset_index on the data you currently have:
df.reset_index(inplace=True)
Another thing if you want the keys as columns:
df = pd.DataFrame.from_dict(d, orient='columns')  # d: one of your dicts
# or try orient='index' if you don't get the desired results
In a for loop:
l = []
for i in d.keys():
    l.append(pd.DataFrame.from_dict(d[i], orient='columns'))
df = pd.concat(l)
Not quite sure what you are trying to do, but do you mean something like this?
You can see inside the data by just printing the dataframe, or you can print each column with the following code.
data = pandas.DataFrame(list_of_dicts)
print(data)
for i in data.loc[:, data.columns]:
    print(data[i])
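For completeness (this is not from the answers above, just a commonly used alternative): pandas ships a helper for exactly this kind of nested JSON. pd.json_normalize flattens the inner dicts into dotted column names, one row per post; in older pandas it lives at pandas.io.json.json_normalize. A minimal sketch with a trimmed-down post:
import pandas as pd

list_of_dicts = [
    {'id': 391257, 'post_type': 'post',
     'comments': {'count': 91, 'can_post': 1},
     'likes': {'count': 182, 'user_likes': 0},
     'views': {'count': 63997}},
]

df = pd.json_normalize(list_of_dicts)  # nested keys become columns like 'comments.count'
print(df.columns.tolist())
print(df)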

pymc3 model with ODE solver using theano

I am using a model where the mean response depends on the solution to an ODE. I am trying to fit this model using pymc3, but am running into problems (relating to missing test values) when joining the ODE solver to the model.
Model
y_t is Lognormally distributed with mean mu_t and standard deviation sigma.
mu_t is the solution to a set of ODE's at time t.
Problem
Theano/pymc3 gives an error because the theano tensor variables used in solving the ODE have no test values. See below for a copy of the errors. I've tried setting
th.config.compute_test_value = 'ignore'
but I think that pymc3 changes it back to require test values. I am fairly new to theano and pymc3, so I apologise if I am missing something obvious.
Code
Imports
import numpy as np
import pymc3 as pm
import theano as th
import theano.tensor as tt
from FormatData import *
import pandas as pd
Functions to solve ODE
# Runge-Kutta integrator
def rungekuttastep(h, y, fprime, *args):
    k1 = h*fprime(y, *args)
    k2 = h*fprime(y + 0.5*k1, *args)
    k3 = h*fprime(y + 0.5*k2, *args)
    k4 = h*fprime(y + k3, *args)
    y_np1 = y + (1./6.)*k1 + (1./3.)*k2 + (1./3.)*k3 + (1./6.)*k4
    return y_np1
# ODE equations for my model
def ODE(y, *args):
    alpha = args
    yprime = tt.zeros_like(y)
    yprime = tt.set_subtensor(yprime[0], alpha[0]*y[1] - alpha[1]*y[0])
    yprime = tt.set_subtensor(yprime[1], -alpha[2]*y[0]*y[1])
    return yprime
# Function to return ODE values given parameters
def calcFittedTitreVals(alpha, niter, hsize, inits):
    y0 = tt.vector('y0')
    h = tt.scalar('h')
    i = tt.iscalar('i')
    alpha0 = tt.scalar('alpha0')
    alpha1 = tt.scalar('alpha1')
    alpha2 = tt.scalar('alpha2')
    result, updates = th.scan(fn=lambda y0, h: rungekuttastep(h, y0, ODE, alpha0, alpha1, alpha2),
                              outputs_info=[{'initial': y0}], non_sequences=h, n_steps=i)
    odeint = th.function(inputs=[h, y0, i, alpha0, alpha1, alpha2], outputs=result, updates=updates)
    z1 = odeint(h=hsize,   # size of the interval
                y0=inits,  # starting values
                i=niter,   # number of iterations
                alpha0=alpha[0], alpha1=alpha[1], alpha2=alpha[2])
    C = z1[:, 0]
    A = z1[:, 1]
    return C, A
Generate Data
t = np.arange(0, 45, 0.1)  # times at which to solve ODE
alpha = np.array([2, 0.4, 0.0001])  # true parameter values for the ODE
C, A = calcFittedTitreVals(alpha, niter=450, hsize=0.1, inits=[0, 1200])
td = np.arange(0, 45, 1)  # times at which I observe data
sigma = 0.1
indices = np.array(np.searchsorted(t, td)).flatten()
DATA = pd.DataFrame(
    data={'observed': np.random.lognormal(np.log(C[indices]), sigma),
          'true': C[indices], 'time': td})
pymc3 model function
def titreLogNormal(Y, hsize, inits, times):
    Y = th.shared(Y)
    inits = th.shared(inits)
    timesG = np.arange(0, 45, step=hsize)
    indices = np.array(np.searchsorted(timesG, times)).flatten()
    nTsteps = th.shared(timesG.shape[0])
    hsize = th.shared(hsize)
    y0 = tt.vector('y0')
    h = tt.scalar('h')
    i = tt.iscalar('i')
    alpha0 = tt.scalar('alpha0')
    alpha1 = tt.scalar('alpha1')
    alpha2 = tt.scalar('alpha2')
    result, updates = th.scan(fn=lambda y0, h: rungekuttastep(h, y0, ODE, alpha0, alpha1, alpha2),
                              outputs_info=[{'initial': y0}], non_sequences=h, n_steps=i)
    odeint = th.function(inputs=[h, y0, i, alpha0, alpha1, alpha2], outputs=result, updates=updates)
    model = pm.Model()
    with model:
        alpha = pm.Gamma('alpha', 0., 10., shape=3, testval=[2, 0.4, 0.001])
        sigma = pm.Gamma('sigma', 0.1, 0.1, testval=0.1)
        res = odeint(h=hsize, y=inits, i=nTsteps, alpha0=alpha[0], alpha1=alpha[1], alpha2=alpha[2])
        mu = pm.Deterministic("mu", res[indices, 0])
        y = pm.Lognormal('y', mu, sigma, observed=Y)
    return model
Create model with data
model = titreLogNormal(
    Y=np.array(DATA[['observed']]).flatten(),
    hsize=0.1, inits={'a': 0, 'p': 1200},
    times=np.array(DATA[['time']]).flatten())
Errors
Traceback (most recent call last):
File "/home/millerp/.local/lib/python3.5/site-packages/theano/gof/op.py", line 625, in __call__
storage_map[ins] = [self._get_test_value(ins)]
File "/home/millerp/.local/lib/python3.5/site-packages/theano/gof/op.py", line 581, in _get_test_value
raise AttributeError('%s has no test value %s' % (v, detailed_err_msg))
AttributeError: y0 has no test value
Backtrace when that variable is created:
File "/home/millerp/pycharm/pycharm-edu-3.5.1/helpers/pydev/_pydev_bundle/pydev_console_utils.py", line 251, in add_exec
more = self.do_add_exec(code_fragment)
File "/home/millerp/pycharm/pycharm-edu-3.5.1/helpers/pydev/_pydev_bundle/pydev_ipython_console.py", line 41, in do_add_exec
res = bool(self.interpreter.add_exec(codeFragment.text))
File "/home/millerp/pycharm/pycharm-edu-3.5.1/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py", line 455, in add_exec
self.ipython.run_cell(line, store_history=True)
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2821, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-54c976fefe1e>", line 99, in <module>
times=np.array(DATA[['time']]).flatten()
File "<ipython-input-2-54c976fefe1e>", line 71, in titreLogNormal
y0 = tt.vector('y0')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-54c976fefe1e>", line 99, in <module>
times=np.array(DATA[['time']]).flatten()
File "<ipython-input-2-54c976fefe1e>", line 86, in titreLogNormal
outputs_info=[{'initial': y0}], non_sequences=h, n_steps=i)
File "/home/millerp/.local/lib/python3.5/site-packages/theano/scan_module/scan.py", line 660, in scan
tensor.shape_padleft(actual_arg), 0),
File "/home/millerp/.local/lib/python3.5/site-packages/theano/tensor/basic.py", line 4429, in shape_padleft
return DimShuffle(_t.broadcastable, pattern)(_t)
File "/home/millerp/.local/lib/python3.5/site-packages/theano/gof/op.py", line 639, in __call__
(i, ins, node, detailed_err_msg))
ValueError: Cannot compute test value: input 0 (y0) of Op InplaceDimShuffle{x,0}(y0) missing default value.
Backtrace when that variable is created:
File "/home/millerp/pycharm/pycharm-edu-3.5.1/helpers/pydev/_pydev_bundle/pydev_console_utils.py", line 251, in add_exec
more = self.do_add_exec(code_fragment)
File "/home/millerp/pycharm/pycharm-edu-3.5.1/helpers/pydev/_pydev_bundle/pydev_ipython_console.py", line 41, in do_add_exec
res = bool(self.interpreter.add_exec(codeFragment.text))
File "/home/millerp/pycharm/pycharm-edu-3.5.1/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py", line 455, in add_exec
self.ipython.run_cell(line, store_history=True)
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2821, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-54c976fefe1e>", line 99, in <module>
times=np.array(DATA[['time']]).flatten()
File "<ipython-input-2-54c976fefe1e>", line 71, in titreLogNormal
y0 = tt.vector('y0')
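For reference on what the error is asking for (a general Theano note, not a verified fix for this particular model): when compute_test_value is active, Theano looks for a tag.test_value on every symbolic input, so a common workaround is to attach explicit test values to the symbolic variables before th.scan is built, rather than switching the flag off. The numbers below are just the question's own initial values used as placeholders.
import numpy as np
import theano as th
import theano.tensor as tt

th.config.compute_test_value = 'warn'  # pymc3 typically turns this machinery on

y0 = tt.vector('y0')
y0.tag.test_value = np.array([0.0, 1200.0], dtype=th.config.floatX)  # placeholder initial state
h = tt.scalar('h')
h.tag.test_value = np.asarray(0.1, dtype=th.config.floatX)           # step size
i = tt.iscalar('i')
i.tag.test_value = np.int32(450)                                     # number of steps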

Receiving Type Error: 0 while updating pandas df using Data Nitro

I am updating a pandas DataFrame.
The script looks up a product. If the product is already in the data frame, it just updates its columns with the accumulated new values.
If the product is not there, it creates a new row to insert the values of the product.
Code
for m in range(0, len(product_sales_price)):
    if exact_match(str(sales_record[n-1]), str(product_sales_price[m])) == True:
        total_product_daily_sales = counter * product_sales_price[m+1]
        '''
        print(total_product_daily_sales)
        '''
        total_product_daily_net_profit = total_product_daily_sales * .1
        print(counter)
        print(product_sales_price[m+1])
        print(total_product_daily_sales)
        print(total_product_daily_net_profit)
        print(m)
        print(product_sales_price[m])
        # rows whose first column matches the current product
        match = product_revenue_and_net_profit_df.ix[:, 0] == product_sales_price[m]
        if match.any() == True:
            product_revenue_and_net_profit_df.ix[:, :][match] = [
                product_revenue_and_net_profit_df.ix[:, 0][match],
                product_revenue_and_net_profit_df.ix[:, 1][match] + counter,
                product_revenue_and_net_profit_df.ix[:, 2][match] + total_product_daily_sales,
                product_revenue_and_net_profit_df.ix[:, 3][match] + total_product_daily_net_profit]
        else:
            product_revenue_and_net_profit_df.ix[product_revenue_and_net_profit_df.shape[0] + 1, :] = [
                product_sales_price[m], counter, total_product_daily_sales,
                total_product_daily_net_profit]
Run Time
<sale_frequency time (in seconds):
1
423.44
423.44
42.344
0
Bushwacker Dodge Pocket Style Fender Flare Set of 4
Traceback (most recent call last):
File "32\scriptStarter.py", line 120, in <module>
File "C:\Python Projects\Amazon-Sales\amazon_analysis.py", line 162, in <module>
print (timeit.timeit(fn + "()", "from __main__ import "+fn, number=1))
File "C:\Users\onthego\Anaconda3\lib\timeit.py", line 219, in timeit
return Timer(stmt, setup, timer).timeit(number)
File "C:\Users\onthego\Anaconda3\lib\timeit.py", line 184, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
File "C:\Python Projects\Amazon-Sales\amazon_analysis.py", line 91, in sale_frequency
m])]+total_product_daily_net_profit]
File "C:\Users\onthego\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2122, in __setitem__
self._setitem_array(key, value)
File "C:\Users\onthego\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2142, in _setitem_array
self.ix._setitem_with_indexer(indexer, value)
File "C:\Users\onthego\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 448, in _setitem_with_indexer
elif np.array(value).ndim == 2:
File "C:\Users\onthego\Anaconda3\lib\site-packages\pandas\core\series.py", line 521, in __getitem__
result = self.index.get_value(self, key)
File "C:\Users\onthego\Anaconda3\lib\site-packages\pandas\core\index.py", line 1595, in get_value
return self._engine.get_value(s, k)
File "pandas\index.pyx", line 100, in pandas.index.IndexEngine.get_value (pandas\index.c:3113)
File "pandas\index.pyx", line 108, in pandas.index.IndexEngine.get_value (pandas\index.c:2844)
File "pandas\index.pyx", line 154, in pandas.index.IndexEngine.get_loc (pandas\index.c:3704)
File "pandas\hashtable.pyx", line 375, in pandas.hashtable.Int64HashTable.get_item (pandas\hashtable.c:7224)
File "pandas\hashtable.pyx", line 381, in pandas.hashtable.Int64HashTable.get_item (pandas\hashtable.c:7162)
KeyError: 0
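The traceback is the same pandas symptom as the KeyError: 0 question above: an integer key (0) being looked up as an index label that doesn't exist. Without the rest of the script this is only an assumption, but the update-or-append logic is usually easier to get right with a boolean mask and .loc (.ix is deprecated in later pandas). A minimal sketch with hypothetical column names:
import pandas as pd

df = pd.DataFrame(columns=['product', 'units', 'sales', 'net_profit'])

def upsert(df, product, units, sales, net_profit):
    mask = df['product'] == product
    if mask.any():
        # accumulate into the existing row
        df.loc[mask, 'units'] += units
        df.loc[mask, 'sales'] += sales
        df.loc[mask, 'net_profit'] += net_profit
    else:
        # append a new row at the next positional label
        df.loc[len(df)] = [product, units, sales, net_profit]
    return df

df = upsert(df, 'Fender Flare Set of 4', 1, 423.44, 42.344)
df = upsert(df, 'Fender Flare Set of 4', 2, 846.88, 84.688)
print(df)  # one accumulated row for the product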
