I would like to plot a line plot (source: pandas DataFrame) over an hvplot (source: xarray/NetCDF).
The xarray Dataset looks like this:
dataDIR = 'ceilodata.nc'
DS = xr.open_dataset(dataDIR)
DS = DS.transpose()
print(DS)
<xarray.Dataset>
Dimensions: (range_hr: 32, range: 1024, layer: 3, time: 5760)
Coordinates:
* range_hr (range_hr) float32 0.001 4.995 9.99 ... 144.9 149.9 154.8
* range (range) float32 14.98 29.97 44.96 ... 1.533e+04 1.534e+04
* layer (layer) int32 1 2 3
* time (time) datetime64[ns] 2022-03-18 ... 2022-03-18T23:59:46
Data variables: (12/41)
zenith float32 ...
wavelength float32 ...
scaling float32 ...
range_gate_hr float32 ...
range_gate float32 ...
longitude float32 ...
... ...
cbe (layer, time) int16 ...
beta_raw_hr (range_hr, time) float32 ...
beta_raw (range, time) float32 ...
bcc (time) int8 ...
base (time) float32 ...
average_time (time) int32 ...
Attributes: (12/13)
comment:
software_version: 15.06.1 2.13 1.040 1
title: CHM15k Nimbus
wmo_id: 10865
month: 3
source: CHM160138
... ...
serlom: TUB160038
location: muenchen
year: 2022
device_name: CHM160138
institution: DWD
day: 18
The pandas dataframe source looks like this:
df = pd.read_csv('PTU.csv')
print(df)
Unnamed: 0 PTU
0 2022-03-18 07:38:56 451.839
1 2022-03-18 07:38:57 468.826
2 2022-03-18 07:38:58 469.093
3 2022-03-18 07:38:59 469.356
4 2022-03-18 07:39:00 469.623
... ... ...
6140 2022-03-18 09:21:16 31690.600
6141 2022-03-18 09:21:17 31694.700
6142 2022-03-18 09:21:18 31692.900
6143 2022-03-18 09:21:19 31712.000
6144 2022-03-18 09:21:20 31711.500
[6145 rows x 2 columns]
Both are time-dependent datasets but have different time stamps and frequencies; time is the index in each data set.
I tried to plot them together with additional imports of holoviews. While each single plot is no problem, plotting them together does not work the way I tried it:
import hvplot.pandas
import holoviews as hv
# cmap of the xarray:
ceilo = DS.b_r.hvplot(cmap="viridis_r", width=850, height=600, title='title', clim=(5, 80))
# line plot of the data frame
p = df.hvplot.line()
# add pressure line plot to pcolormeshplot using * which overlays the line on the plot
ceilo * df
but this ended in an error message with the following complete traceback:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-10-2b1c6baca339> in <module>
24 p = df.hvplot.line()
25 # add pressure line plot to pcolormeshplot using * which overlays the line on the plot
---> 26 ceilo * df
c:\python38\lib\site-packages\pandas\core\ops\common.py in new_method(self, other)
68 other = item_from_zerodim(other)
69
---> 70 return method(self, other)
71
72 return new_method
c:\python38\lib\site-packages\pandas\core\arraylike.py in __rmul__(self, other)
118 @unpack_zerodim_and_defer("__rmul__")
119 def __rmul__(self, other):
--> 120 return self._arith_method(other, roperator.rmul)
121
122 @unpack_zerodim_and_defer("__truediv__")
c:\python38\lib\site-packages\pandas\core\frame.py in _arith_method(self, other, op)
6936 other = ops.maybe_prepare_scalar_for_op(other, (self.shape[axis],))
6937
-> 6938 self, other = ops.align_method_FRAME(self, other, axis, flex=True, level=None)
6939
6940 new_data = self._dispatch_frame_op(other, op, axis=axis)
c:\python38\lib\site-packages\pandas\core\ops\__init__.py in align_method_FRAME(left, right, axis, flex, level)
275 elif is_list_like(right) and not isinstance(right, (ABCSeries, ABCDataFrame)):
276 # GH 36702. Raise when attempting arithmetic with list of array-like.
--> 277 if any(is_array_like(el) for el in right):
278 raise ValueError(
279 f"Unable to coerce list of {type(right[0])} to Series/DataFrame"
c:\python38\lib\site-packages\holoviews\core\element.py in __iter__(self)
94 def __iter__(self):
95 "Disable iterator interface."
---> 96 raise NotImplementedError('Iteration on Elements is not supported.')
97
98
NotImplementedError: Iteration on Elements is not supported.
Is the different time frequency a problem here? The line plot should be oriented along the x- and y-axes, lining up with the time stamps and altitudes of the underlying colormap plot.
To illustrate what I am aiming for, here is a picture of my goal:
Thanks for reading / helping.
I found a solution for this case:
Both datasets' time columns have to have the same dtype; in my case it is datetime64[ns] (to match the NetCDF xarray). That is why I converted the DataFrame time column to datetime64[ns]:
df.Datetime = df.Datetime.astype('datetime64')
I also found the data column to be of type "object", so I converted it to "float":
df.PTU = df.PTU.astype(float) # convert to correct data type
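As an aside, the same two conversions can be written with pandas' dedicated helpers; a minimal sketch, assuming the same column names (errors='coerce' turns unparsable entries into NaN instead of raising):
df['Datetime'] = pd.to_datetime(df['Datetime'])        # -> datetime64[ns]
df['PTU'] = pd.to_numeric(df['PTU'], errors='coerce')  # object -> float64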
The last step was choosing hvplot's quadmesh, which helps in plotting xarray data:
import hvplot.xarray
# then plot via DS.<variable>.hvplot.quadmesh(...)
And here is my final solution:
title = 'Ceilo data' + '\ndate: ' + str(DS.year) + '-' + str(DS.month) + '-' + str(DS.day)
ceilo = DS.br.hvplot.quadmesh(cmap="viridis_r", width=850, height=600, title=title,
                              clim=(1000, 10000),  # set colorbar limits
                              cnorm='log',         # choose log scale
                              clabel='colorbar title',
                              rot=0                # degree rotation of ticks
                              )
# from: https://justinbois.github.io/bootcamp/2020/lessons/l27_holoviews.html
# take care! may take 2...3 minutes to be plotted:
p = hv.Points(data=df,
kdims=['Datetime', 'PTU'],
).opts(#alpha=0.7,
color='red',
size=1,
ylim=(0, 5000))
# add PTU line plot to quadmesh plot using * which overlays the line on the plot
ceilo * p
Related
I have a year-wise (1980-2020) precipitation data set in netCDF format. I am importing the files into xarray to get the merged precipitation values across all years:
import netCDF4
import numpy
import xarray as xr
import pandas as pd
prcp=xr.open_mfdataset('/home/hrsa/Sayantan/HAR_V2/prcp/HARv2_d10km_d_2d_prcp_*.nc',combine = 'nested', concat_dim="time")
prcp
which renders:
<xarray.Dataset>
Dimensions:      (time: 14976, west_east: 381, south_north: 252)
Coordinates:
  * time         (time) datetime64[ns] 1980-01-01 ... 2020-12-31
  * west_east    (west_east) float32 -1.675e+06 -1.665e+06 ... 2.125e+06
  * south_north  (south_north) float32 -7.45e+05 -7.35e+05 ... 1.765e+06
    lon          (south_north, west_east) float32 dask.array<chunksize=(252, 381), meta=np.ndarray>
    lat          (south_north, west_east) float32 dask.array<chunksize=(252, 381), meta=np.ndarray>
Data variables:
    prcp         (time, south_north, west_east) float32 dask.array<chunksize=(366, 252, 381), meta=np.ndarray>
Attributes: (33)
This is a large dataset, hence I need to subset it according to an SRTM image whose extent (in EPSG:4326) is defined as
# Extents of the SRTM DEM covering Panchi_B and the SASE AWS/Base Camp
min_lon = 77.0
min_lat = 32.0
max_lon = 78.0
max_lat = 33.0
In order to subset according to the above coordinates, I have tried the following:
prcp = prcp.sel(lat = slice(min_lat,max_lat), lon = slice(min_lon,max_lon))
The error output:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/.pyenv/versions/3.9.7/envs/v3.9.7/lib/python3.9/site-packages/xarray/core/indexing.py:73, in group_indexers_by_index(data_obj, indexers, method, tolerance)
72 try:
---> 73 index = xindexes[key]
74 coord = data_obj.coords[key]
KeyError: 'lat'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
Input In [25], in <cell line: 1>()
----> 1 prcp = prcp.sel(lat = slice(min_lat,max_lat), lon = slice(min_lon,max_lon))
File ~/.pyenv/versions/3.9.7/envs/v3.9.7/lib/python3.9/site-packages/xarray/core/dataset.py:2501, in Dataset.sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
2440 """Returns a new dataset with each array indexed by tick labels
2441 along the specified dimension(s).
2442
(...)
2498 DataArray.sel
2499 """
2500 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
-> 2501 pos_indexers, new_indexes = remap_label_indexers(
2502 self, indexers=indexers, method=method, tolerance=tolerance
2503 )
2504 # TODO: benbovy - flexible indexes: also use variables returned by Index.query
2505 # (temporary dirty fix).
2506 new_indexes = {k: v[0] for k, v in new_indexes.items()}
File ~/.pyenv/versions/3.9.7/envs/v3.9.7/lib/python3.9/site-packages/xarray/core/coordinates.py:421, in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
414 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "remap_label_indexers")
416 v_indexers = {
417 k: v.variable.data if isinstance(v, DataArray) else v
418 for k, v in indexers.items()
419 }
--> 421 pos_indexers, new_indexes = indexing.remap_label_indexers(
422 obj, v_indexers, method=method, tolerance=tolerance
423 )
424 # attach indexer's coordinate to pos_indexers
425 for k, v in indexers.items():
File ~/.pyenv/versions/3.9.7/envs/v3.9.7/lib/python3.9/site-packages/xarray/core/indexing.py:110, in remap_label_indexers(data_obj, indexers, method, tolerance)
107 pos_indexers = {}
108 new_indexes = {}
--> 110 indexes, grouped_indexers = group_indexers_by_index(
111 data_obj, indexers, method, tolerance
112 )
114 forward_pos_indexers = grouped_indexers.pop(None, None)
115 if forward_pos_indexers is not None:
File ~/.pyenv/versions/3.9.7/envs/v3.9.7/lib/python3.9/site-packages/xarray/core/indexing.py:84, in group_indexers_by_index(data_obj, indexers, method, tolerance)
82 except KeyError:
83 if key in data_obj.coords:
---> 84 raise KeyError(f"no index found for coordinate {key}")
85 elif key not in data_obj.dims:
86 raise KeyError(f"{key} is not a valid dimension or coordinate")
KeyError: 'no index found for coordinate lat'
How can I resolve this issue? Any help will be appreciated. Thank you.
############# Edit (for @Robert Wilson) ##################
In order to find out the ranges, I did the following:
lon = prcp.lon.to_dataframe()
lon
lat = prcp.lat.to_dataframe()
lat
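For context: lat and lon here are two-dimensional coordinates (they depend on both south_north and west_east) and carry no index, which is exactly what the KeyError is saying, so label-based .sel cannot use them. A minimal sketch of one common workaround, boolean masking with .where, using the bounds defined above:
mask = ((prcp.lat >= min_lat) & (prcp.lat <= max_lat) &
        (prcp.lon >= min_lon) & (prcp.lon <= max_lon))
prcp_subset = prcp.where(mask, drop=True)  # keep only grid cells inside the box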
This is part of my implementation of the k-medoids algorithm. I tested the algorithm with small images that I generate randomly, and it works fine (6x6). The problem comes when I use a real image (620x412): the algorithm still converges, but it takes approximately 5 minutes per iteration. After profiling my code, I found that np.linalg.norm is causing the bottleneck, but I'm not sure what I'm doing wrong here:
Call count Time(ms) Own Time(ms)
<method 'reduce' of 'numpy.ufunc' objects> 88329 70672 70672
norm 44164 152196 46999
<method 'astype' of 'numpy.ndarray' objects> 44165 38445 38445
manhattan_distance 44159 199809 37795
<built-in method numpy.core._multiarray_umath.implement_array_function> 88334 166973 9512
update_medoids 1 103862 658
This is my implementation of manhattan_distance and update_medoids.
def manhattan_distance(pixels, medoids):
    """
    :param pixels: Array of RGB pixels
    :param medoids: pixels selected as medoids. It can be a (3,) array (single medoid), or a vector (k, 3) for
    multiple medoids
    :return:
    """
    if len(medoids.shape) == 1:
        medoids = medoids.reshape(1, len(medoids))
        distance = np.linalg.norm(pixels - medoids, ord=1, axis=1)
    else:
        distance = np.zeros((pixels.shape[0], len(medoids)))
        for medoid_idx in range(len(medoids)):
            medoid = medoids[medoid_idx].reshape(1, len(medoids[medoid_idx]))
            distance[:, medoid_idx] = np.linalg.norm(pixels - medoid, ord=1, axis=1)
    return distance
The update_medoids function:
def update_medoids(pixels, medoids, distance):
    '''
    :param pixels: Array of RGB pixels
    :param medoids: vector (k, 3) of pixels selected as medoids.
    :param distance: distance function used in algorithm
    :return: new vector of medoids with swapped members, if any.
    '''
    distances = distance(pixels, medoids)
    labels = assign_labels(distances)
    new_medoids = medoids
    for cluster in set(labels):
        cluster_dissimilarity = np.sum(distance(pixels, medoids[cluster]))
        cluster_points = pixels[labels == cluster]
        for data_point in cluster_points:
            hypothesis_medoid = data_point
            temp_cluster_dissimilarity = np.sum(distance(pixels, hypothesis_medoid))
            if temp_cluster_dissimilarity < cluster_dissimilarity:
                cluster_dissimilarity = temp_cluster_dissimilarity
                new_medoids[cluster] = hypothesis_medoid
    return new_medoids
My suspicion is np.linalg.norm(pixels - medoid, ord=1, axis=1), but my only guess is that broadcasting is slowing down the calculation, although that doesn't sound realistic. Thoughts?
EDIT
input samples
pixels is an array of pixels obtained by reshaping the matrix representation of an image; that is,
[[[232 10 22] [213 76 156] [156 232 103]]
 [[116 160 12] [115 188 118] [ 74 42 106]]
 [[157 30 36] [ 98 89 173] [142 76 225]]]
is reshaped into a vector that looks like this
[[232 10 22]
[213 76 156]
[156 232 103]
[116 160 12]
[115 188 118]
[ 74 42 106]
[157 30 36]
[ 98 89 173]
[142 76 225]]
medoids is a subset of pixels, and its rows act as the representatives of each cluster.
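For what it's worth, the per-medoid calls to np.linalg.norm can be collapsed into one broadcasted computation over all medoids at once; a minimal sketch of that idea (same input shapes as manhattan_distance above; the name is hypothetical and this is an illustration, not a tested drop-in):
def manhattan_distance_vec(pixels, medoids):
    # pixels: (n, 3), medoids: (3,) or (k, 3) -> distances: (n, k)
    pixels = pixels.astype(np.int64)                   # avoid unsigned-integer wraparound
    medoids = np.atleast_2d(medoids).astype(np.int64)
    return np.abs(pixels[:, None, :] - medoids[None, :, :]).sum(axis=2)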
I am trying to create a for loop which uses a defined function (B_lambda) and takes in values of wavelength and temperature to produce values of intensity; i.e., I want the loop to take the function B_lambda and run through every value within my listed wavelength range for each temperature in the temperature list, and then plot the results. I am not very good with the syntax and have tried many ways, but nothing produces what I need and I mostly get errors. I have no idea how to use a for loop to plot, and none of the online sources I have checked helped me with using a defined function in a for loop. I will put my latest code, which seems to have the fewest errors, below, along with the error message:
import matplotlib.pylab as plt
import numpy as np
from astropy import units as u
import scipy.constants
%matplotlib inline
#Importing constants to use.
h = scipy.constants.h
c = scipy.constants.c
k = scipy.constants.k
wavelengths= np.arange(1000,30000)*1.e-10
temperature=[3000,4000,5000,6000]
for lam in wavelengths:
    for T in temperature:
        B_lambda = ((2*h*c**2)/(lam**5))*((1)/(np.exp((h*c)/(lam*k*T))-1))
        plt.figure()
        plt.plot(wavelengths, B_lambda)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-6-73b866241c49> in <module>
17 B_lambda = ((2*h*c**2)/(lam**5))*((1)/(np.exp((h*c)/(lam*k*T))-1))
18 plt.figure()
---> 19 plt.plot(wavelengths,B_lambda)
20
21
/usr/local/lib/python3.6/dist-packages/matplotlib/pyplot.py in plot(scalex, scaley, data, *args, **kwargs)
2787 return gca().plot(
2788 *args, scalex=scalex, scaley=scaley, **({"data": data} if data
-> 2789 is not None else {}), **kwargs)
2790
2791
/usr/local/lib/python3.6/dist-packages/matplotlib/axes/_axes.py in plot(self, scalex, scaley, data, *args, **kwargs)
1663 """
1664 kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D._alias_map)
-> 1665 lines = [*self._get_lines(*args, data=data, **kwargs)]
1666 for line in lines:
1667 self.add_line(line)
/usr/local/lib/python3.6/dist-packages/matplotlib/axes/_base.py in __call__(self, *args, **kwargs)
223 this += args[0],
224 args = args[1:]
--> 225 yield from self._plot_args(this, kwargs)
226
227 def get_next_color(self):
/usr/local/lib/python3.6/dist-packages/matplotlib/axes/_base.py in _plot_args(self, tup, kwargs)
389 x, y = index_of(tup[-1])
390
--> 391 x, y = self._xy_from_xy(x, y)
392
393 if self.command == 'plot':
/usr/local/lib/python3.6/dist-packages/matplotlib/axes/_base.py in _xy_from_xy(self, x, y)
268 if x.shape[0] != y.shape[0]:
269 raise ValueError("x and y must have same first dimension, but "
--> 270 "have shapes {} and {}".format(x.shape, y.shape))
271 if x.ndim > 2 or y.ndim > 2:
272 raise ValueError("x and y can be no greater than 2-D, but have "
ValueError: x and y must have same first dimension, but have shapes (29000,) and (1,)
The first thing to note (and this is minor) is that astropy is not required to run your code, so you can simplify the import statements.
import matplotlib.pylab as plt
import numpy as np
import scipy.constants
%matplotlib inline
#Importing constants to use.
h = scipy.constants.h
c = scipy.constants.c
k = scipy.constants.k
wavelengths= np.arange(1000,30000,100)*1.e-10 # here, I chose steps of 100, because plotting 29000 datapoints takes a while
temperature=[3000,4000,5000,6000]
Secondly, to tidy up the loop a bit, you can write a helper function that you call from within your loop:
def f(lam, T):
    return ((2*h*c**2)/(lam**5))*((1)/(np.exp((h*c)/(lam*k*T))-1))
now you can collect the output of your function, together with the input parameters, e.g. in a list of tuples:
outputs = []
for lam in wavelengths:
    for T in temperature:
        outputs.append((lam, T, f(lam, T)))
Since you vary both wavelength and temperature, a 3d plot makes sense:
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.plot(*zip(*outputs))
An alternative would be to display the data as an image, using colour to indicate the function output.
I am also including an alternative method to generate the data in this one. Since the function f can take arrays as input, you can feed one temperature at a time, and with it, all the wavelengths simultaneously.
# initialise output as array with proper shape
outputs = np.zeros((len(wavelengths), len(temperature)))
for i, T in enumerate(temperature):
    outputs[:, i] = f(wavelengths, T)
The output now is a large matrix, which you can visualise as an image:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(outputs, aspect=10e8, interpolation='none',
extent=[
np.min(temperature),
np.max(temperature),
np.max(wavelengths),
np.min(wavelengths)]
)
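For completeness, a third option, and possibly closest to what the question was after, is one 2-D line per temperature, reusing the helper f and the arrays defined above (a sketch; the axis labels are my own wording):
fig = plt.figure()
ax = fig.add_subplot(111)
for T in temperature:
    # f accepts the whole wavelength array at once, so each call draws one curve
    ax.plot(wavelengths, f(wavelengths, T), label=str(T) + ' K')
ax.set_xlabel('wavelength (m)')
ax.set_ylabel('spectral radiance')
ax.legend()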
I'm new to data science, Python, and Jupyter Notebook, and I'm currently studying how to do k-means clustering on a data set. I came across ways in which one can introduce data:
Data = {'x': [25,34,22,27,33,33,31,22,35,34,67,54,57,43,50,57,59,52,65,47,49,48,35,33,44,45,38,43,51,46],
        'y': [79,51,53,78,59,74,73,57,69,75,51,32,40,47,53,36,35,58,59,50,25,20,14,12,20,5,29,27,8,7]
       }
df = pd.DataFrame(Data, columns=['x', 'y'])
and the use of blobs:
from sklearn.datasets import make_blobs
data = make_blobs(n_samples=200, n_features=2, centers=4, cluster_std=1.6, random_state=50)
but I would like to know how to write proper code with a CSV file imported from my computer and do k-means with scaling. Thank you in advance; I could not find relevant blogs to help me.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.cluster import KMeans
data=pd.read_csv("C:/Users/Dulangi/Downloads/winequality-red.csv")
data
data["alcohol"]=data["alcohol"]/data["alcohol"].max()
data["quality"]=data["quality"]/data["quality"].max()
plt.scatter(data["alcohol"],data['quality'])
plt.xlabel("alcohol")
plt.ylabel('quality')
plt.show()
x=data.copy()
kmeans=KMeans(2)
kmeans.fit(x)
clusters=x.copy()
clusters['cluster_pred']=kmeans.fit_predict(x)
plt.scatter(clusters["alcohol"],clusters['quality'],c=clusters['cluster_pred'],cmap='rainbow')
plt.xlabel("alcohol")
plt.ylabel('quality')
plt.show()
from sklearn import preprocessing
x_scaled=preprocessing.scale(x)
#x_scaled
wcss=[]
for i in range(1,30):
    kmeans=KMeans(i)
    kmeans.fit(x_scaled)
wcss.append(kmeans.inertia_)
wcss
plt.plot(range(1,30),wcss)
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
This is what I tried; the error I got:
ValueError Traceback (most recent call last)
<ipython-input-12-d4955ce8615e> in <module>
39
40
---> 41 plt.plot(range(1,30),wcss)
42 plt.xlabel('Number of clusters')
43 plt.ylabel('WCSS')
~\Anaconda3\lib\site-packages\matplotlib\pyplot.py in plot(scalex, scaley, data, *args, **kwargs)
2787 return gca().plot(
2788 *args, scalex=scalex, scaley=scaley, **({"data": data} if data
-> 2789 is not None else {}), **kwargs)
2790
2791
~\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py in plot(self, scalex, scaley, data, *args, **kwargs)
1664 """
1665 kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D._alias_map)
-> 1666 lines = [*self._get_lines(*args, data=data, **kwargs)]
1667 for line in lines:
1668 self.add_line(line)
~\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in __call__(self, *args, **kwargs)
223 this += args[0],
224 args = args[1:]
--> 225 yield from self._plot_args(this, kwargs)
226
227 def get_next_color(self):
~\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _plot_args(self, tup, kwargs)
389 x, y = index_of(tup[-1])
390
--> 391 x, y = self._xy_from_xy(x, y)
392
393 if self.command == 'plot':
~\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _xy_from_xy(self, x, y)
268 if x.shape[0] != y.shape[0]:
269 raise ValueError("x and y must have same first dimension, but "
--> 270 "have shapes {} and {}".format(x.shape, y.shape))
271 if x.ndim > 2 or y.ndim > 2:
272 raise ValueError("x and y can be no greater than 2-D, but have "
ValueError: x and y must have same first dimension, but have shapes (29,) and (1,)
You can do this easily using scikit-learn.
import pandas as pd
data=pd.read_csv('myfile.csv')
df=pd.DataFrame(data,index=None)
df.head()
Check if any rows contain null values:
df.isnull().sum()
Drop all rows with null values, if any:
df.dropna(inplace=True)
Normalize data
Normalize the data with MinMax scaling provided by sklearn
from sklearn import preprocessing
minmax_processed = preprocessing.MinMaxScaler().fit_transform(df.drop('title',axis=1))
df_numeric_scaled = pd.DataFrame(minmax_processed, index=df.index, columns=df.columns[:-1])
df_numeric_scaled.head()
from sklearn.cluster import KMeans
Apply K-Means Clustering
What k to choose?
Let's fit cluster sizes 1 through 19 on our data and take a look at the corresponding score values.
Nc = range(1, 20)
kmeans = [KMeans(n_clusters=i) for i in Nc]
score = [kmeans[i].fit(df_numeric_scaled).score(df_numeric_scaled) for i in range(len(kmeans))]
These score values signify how far our observations are from the cluster centers. We want this score to stay close to 0; a large positive or large negative value indicates that the cluster centers are far from the observations.
Based on these score values, we plot an elbow curve to decide which cluster size is optimal. Note that we are dealing with a tradeoff between cluster size (and hence the computation required) and relative accuracy.
import matplotlib.pyplot as pl
pl.plot(Nc,score)
pl.xlabel('Number of Clusters')
pl.ylabel('Score')
pl.title('Elbow Curve')
pl.show()
Fit K-Means for clustering with k=5
kmeans = KMeans(n_clusters=5)
kmeans.fit(df_numeric_scaled)
df['cluster'] = kmeans.labels_
df.head()
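Applied to the wine-quality CSV from the question, the same recipe would look roughly like this (a sketch: the file name comes from the question, and I assume all columns are numeric so nothing needs to be dropped before scaling):
import pandas as pd
from sklearn import preprocessing
from sklearn.cluster import KMeans

data = pd.read_csv('winequality-red.csv')
data = data.dropna()  # drop rows with missing values, if any
x_scaled = preprocessing.MinMaxScaler().fit_transform(data)
data['cluster'] = KMeans(n_clusters=2).fit_predict(x_scaled)
data.head()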
I am trying to fit some (numpy) data into Python scikit-learn modules, but I keep getting error messages.
When I use the example data set, which I load as per below,
from sklearn import datasets
iris = datasets.load_diabetes() # load pseudo test data
print(np.shape(iris.data))
print(np.shape(iris.target))
(442, 10)
(442,)
It works fine. But when I use my own data set, which I convert to a numpy array, it fails, and I cannot figure out why, as I've explicitly converted it to the same shape and types as iris:
fileLoc = 'C:\\Users\\2018_signal.csv'
data = pd.read_csv(fileLoc)
fl_data = data[['signal', 'sig_dig', 'std_prx']].values
fl_target = data[['actual']].actual.values
ml_data = fl_data[0:int(fraction * len(fl_data))]
ml_target = fl_target[0:int(fraction * len(fl_target))]
print(np.shape(ml_data))
print(np.shape(ml_target))
(6663, 3)
(6663,)
The skLearn code as per below
start_time = time.time()
SKknn_pred = KNeighborsClassifier(n_neighbors=1, algorithm='ball_tree', metric = 'euclidean').fit(ml_data, ml_target).predict(ml_data)
print("knn --- %s seconds ---" % (time.time() - start_time))
print("Number of mislabeled points out of a total %d points : %d" % (fl_data.shape[0],(fl_target != SKknn_pred).sum()))
l_time.append(['knn', 1000 * (time.time() - start_time)])
I get the error message below... Help!
ValueError Traceback (most recent call last)
<ipython-input-96-91e2b93e2580> in <module>()
57
58 start_time = time.time()
---> 59 SKgnb_pred = GaussianNB().fit(ml_data, ml_target).predict(fl_data)
60 print("gnb --- %s seconds ---" % (time.time() - start_time))
61 print("Number of mislabeled points out of a total %d points : %d" % (fl_data.shape[0],(fl_target != SKgnb_pred).sum()))
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\naive_bayes.py in fit(self, X, y, sample_weight)
183 X, y = check_X_y(X, y)
184 return self._partial_fit(X, y, np.unique(y), _refit=True,
--> 185 sample_weight=sample_weight)
186
187 @staticmethod
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\naive_bayes.py in _partial_fit(self, X, y, classes, _refit, sample_weight)
348 self.classes_ = None
349
--> 350 if _check_partial_fit_first_call(self, classes):
351 # This is the first call to partial_fit:
352 # initialize various cumulative counters
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\multiclass.py in _check_partial_fit_first_call(clf, classes)
319 else:
320 # This is the first call to partial_fit
--> 321 clf.classes_ = unique_labels(classes)
322 return True
323
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\multiclass.py in unique_labels(*ys)
95 _unique_labels = _FN_UNIQUE_LABELS.get(label_type, None)
96 if not _unique_labels:
---> 97 raise ValueError("Unknown label type: %s" % repr(ys))
98
99 ys_labels = set(chain.from_iterable(_unique_labels(y) for y in ys))
ValueError: Unknown label type: (array([-78.375, -67.625, -66.75 , ..., 71.375, 76.75 , 78.1 ]),)
A way to use Python to correct the error:
from sklearn import preprocessing
from sklearn import utils

lab_enc = preprocessing.LabelEncoder()  # encode continuous targets as integer class labels
ml_target = lab_enc.fit_transform(ml_target)
print(utils.multiclass.type_of_target(ml_target))
print(utils.multiclass.type_of_target(ml_target.astype('float')))
The scikit-learn module fits the data after the transform above.
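Putting it together, a minimal end-to-end sketch of the fix, using ml_data and ml_target as defined in the question (a sketch, not tested against your data):
from sklearn import preprocessing
from sklearn.neighbors import KNeighborsClassifier

lab_enc = preprocessing.LabelEncoder()
ml_target_enc = lab_enc.fit_transform(ml_target)  # continuous values -> integer class labels
SKknn_pred = KNeighborsClassifier(n_neighbors=1, algorithm='ball_tree',
                                  metric='euclidean').fit(ml_data, ml_target_enc).predict(ml_data)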