fuzzy lookup between 2 series/columns of nonidentical lengths - python-3.x

I am trying to do a fuzzy lookup between two series/columns from df1 and df2, where df1 is the dictionary file (used as the base) and df2 is the target file (to be looked up against):
import pandas as pd
df1 = pd.DataFrame(data ={'Brand_var':['Altmeister Bitter','Altos Las Hormigas Argentinian Wine','Amadeus Contri Sparkling Wine','Amadeus Cream Liqueur','Amadeus Sparkling Sparkling Wine']})
df2 = pd.DataFrame(data = {'Product':['1960 Altmeister 330ML CAN METAL','Hormi 12 Yr Bottle','test']})
I looked for solutions on SO but unfortunately can't seem to find one.
Used:
df3 = df2['ProductLongDesc'].apply(lambda x: difflib.get_close_matches(x, df1['Brand_var'])[0])
also :
df3 = df2['Product'].apply(lambda x: difflib.get_close_matches(x, df1['Brand_var']))
The first one gives me an index error and the second one gives me just the indexes.
My desired output is a mapping between df1 and df2 items using a fuzzy lookup, printing both Brand_var and Product for each match.
Desired Output:
Brand_var                            Product
Altmeister Bitter                    1960 Altmeister 330ML CAN METAL
Altos Las Hormigas Argentinian Wine  Hormi 12 Yr Bottle
Non-matching items in df2 (e.g. test) can be ignored.
Note: the matching string may also be non-identical, i.e. it can have 1 or 2 letters missing. :(
Thank you in advance for taking your time out for this issue. :)

If you install fuzzywuzzy, you are still left with the problem of choosing a proper heuristic to select the right product and to cut those products that are selected incorrectly (explanation below).
install fuzzywuzzy:
pip install fuzzywuzzy
fuzzywuzzy has several methods for a ratio calculation (examples on github). You face the problem: how to choose the best one? I tried them on your data, but all of them failed.
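For reference, this is roughly how the different scorers are called (a small illustration, not part of the original run; the exact scores depend on your fuzzywuzzy version):
from fuzzywuzzy import fuzz

a, b = 'Altmeister Bitter', '1960 Altmeister 330ML CAN METAL'
print(fuzz.ratio(a, b))             # plain Levenshtein-style similarity
print(fuzz.partial_ratio(a, b))     # best matching substring
print(fuzz.token_sort_ratio(a, b))  # order-insensitive token comparison
print(fuzz.token_set_ratio(a, b))   # set-based token comparison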
Code:
import pandas as pd
import numpy as np
from fuzzywuzzy import fuzz
# df1 = ...
# df2 = ...
def get_top_by_ratio(x, df2):
    product_values = df2.Product.values
    # compare two strings character by character
    ratio = np.array([fuzz.partial_ratio(x, val) for val in product_values])
    argmax = np.argmax(ratio)
    rating = ratio[argmax]
    linked_product = product_values[argmax]
    return rating, linked_product
Apply this function to your data:
partial_ratio = (df1.Brand_var.apply(lambda x: get_top_by_ratio(x, df2))
                 .apply(pd.Series)  # convert the returned Series of tuples into a DataFrame
                 .rename(columns={0: 'ratio', 1: 'Product'}))  # just rename the columns
print(partial_ratio)
Out:
   ratio  Product
0     65  1960 Altmeister 330ML CAN METAL   # Altmeister Bitter
1     50  test                              # Altos Las Hormigas Argentinian Wine
2     33  test
3     50  test
4     50  test
That's not good. Other ratio methods such as fuzz.ratio, fuzz.token_sort_ratio etc. failed too.
So I guess extending the heuristic to compare words, not only characters, might help. Define functions that create a vocabulary from your data and encode the sentences, then use a more sophisticated heuristic that looks at words too:
def create_vocab(df1, df2):
    # Leave index 0 free for unknown words
    all_words = set((df1.Brand_var.str.cat(sep=' ') + ' ' + df2.Product.str.cat(sep=' ')).split())
    vocab = dict([(w, i + 1) for i, w in enumerate(all_words)])  # word -> index
    return vocab

def encode(string, vocab):
    """This function encodes a string with the vocabulary"""
    return [vocab[w] if w in vocab else 0 for w in string.split()]
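A quick illustration of what the encoding produces (the exact indices depend on set ordering, so treat the numbers as examples only):
vocab = create_vocab(df1, df2)
print(encode('Altmeister Bitter', vocab))   # e.g. [17, 4] -- known words map to their indices
print(encode('some unseen words', vocab))   # [0, 0, 0]    -- unknown words map to 0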
Define new heuristic:
def get_top_with_heuristic(x, df2, vocab):
    product_values = df2.Product.values
    # compare two strings character by character
    ratio_per_char = np.array([fuzz.partial_ratio(x, val) for val in product_values])
    # compare two strings word by word
    ratio_per_word = np.array([fuzz.partial_ratio(x, encode(val, vocab)) for val in product_values])
    ratio = ratio_per_char + ratio_per_word
    argmax = np.argmax(ratio)
    rating = ratio[argmax]
    linked_product = product_values[argmax]
    return rating, linked_product
Create vocabulary, apply sophisticated heuristic to the data:
vocab = create_vocab(df1, df2)
heuristic_rating = (df1.Brand_var.apply(lambda x: get_top_with_heuristic(x, df2, vocab))
                    .apply(pd.Series)
                    .rename(columns={0: 'ratio', 1: 'Product'}))
print(heuristic_rating)
Out:
   ratio  Product
0     73  1960 Altmeister 330ML CAN METAL   # Altmeister Bitter
1     61  Hormi 12 Yr Bottle                # Altos Las Hormigas Argentinian Wine
2     45  Hormi 12 Yr Bottle
3     50  test
4     50  test
It seems to be correct! Concatenate this dataframe to df1, change index:
result_heuristic = pd.concat((df1, heuristic_rating), axis=1).set_index('Brand_var')
print(result_heuristic)
Out:
ratio Product
Brand_var
Altmeister Bitter 73 1960 Altmeister 330ML CAN METAL
Altos Las Hormigas Argentinian Wine 61 Hormi 12 Yr Bottle
Amadeus Contri Sparkling Wine 45 Hormi 12 Yr Bottle
Amadeus Cream Liqueur 50 test
Amadeus Sparkling Sparkling Wine 50 test
Now you should choose some rule of thumb to cut the incorrect matches. For this example ratio <= 50 works well, but you will probably need some research to define the best heuristic and the right threshold. You will also get some errors anyway: choose an acceptable error rate, e.g. 2%, 5%, ..., and improve your algorithm until you reach it (this task is similar to validating machine-learning classification algorithms).
Cut incorrect "predictions":
result = result_heuristic[result_heuristic.ratio > 50][['Product']]
print(result)
Out:
                                                              Product
Brand_var
Altmeister Bitter                     1960 Altmeister 330ML CAN METAL
Altos Las Hormigas Argentinian Wine                Hormi 12 Yr Bottle
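If you want exactly the two-column mapping from the question, a final reshaping step (not part of the original answer, just reshaping the result above) could be:
final = result.reset_index()[['Brand_var', 'Product']]
print(final)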
Hope it helps!
P.S. Of course, this algorithm is very, very slow; when you optimize it you could, for example, cache the diffs, etc.

Related

Performing a Principal Component Analysis to reconstruct time series creates more values than expected

I want to do a Principal Component Analysis following this notebook to reconstruct the DJIA (I'm using alpha_vantage) from its components (found with Quandl). Yet it seems that I create more values than expected, more than in the original dataframe, when reconstructing the values by multiplying the principal components by their weights:
kernel_pca = KernelPCA(n_components=5).fit(df_z_components)
pca_5 = kernel_pca.transform(-daily_df_components)
weights = fn_weighted_average(kernel_pca.lambdas_)
reconstructed_values = np.dot(pca_5, weights)
Indeed, daily_df_components is created from the components of the DJIA by the Quandl API, which seems to have more data than the library I use to get the DJIA index, alpha_vantage.
Here is the full code:
"""
Obtaining the components data from quandl
"""
import quandl
QUANDL_API_KEY = 'MYKEY'
quandl.ApiConfig.api_key = QUANDL_API_KEY
SYMBOLS = [
    'AAPL', 'MMM', 'BA', 'AXP', 'CAT',
    'CVX', 'CSCO', 'KO', 'DD', 'XOM',
    'GS', 'HD', 'IBM', 'INTC', 'JNJ',
    'JPM', 'MCD', 'MRK', 'MSFT', 'NKE',
    'PFE', 'PG', 'UNH', 'UTX', 'TRV',
    'VZ', 'V', 'WMT', 'WBA', 'DIS'
]
wiki_symbols = ['WIKI/%s' % symbol for symbol in SYMBOLS]
df_components = quandl.get(
    wiki_symbols,
    start_date='2017-01-01',
    end_date='2017-12-31',
    column_index=11)
df_components.columns = SYMBOLS
filled_df_components = df_components.fillna(method='ffill')
daily_df_components = filled_df_components.resample('24h').ffill()
daily_df_components = daily_df_components.fillna(method='bfill')
"""
Download the all-time DJIA dataset
"""
from alpha_vantage.timeseries import TimeSeries
# Update your Alpha Vantage API key here...
ALPHA_VANTAGE_API_KEY = 'MYKEY'
ts = TimeSeries(key=ALPHA_VANTAGE_API_KEY, output_format='pandas')
df, meta_data = ts.get_intraday(symbol='DIA',interval='1min', outputsize='full')
# Finding eigenvectors and eigen values
fn_weighted_average = lambda x: x/x.sum()
weighted_values = fn_weighted_average(fitted_pca.lambdas_)[:5]
from sklearn.decomposition import KernelPCA
fn_z_score = lambda x: (x - x.mean())/x.std()
df_z_components = daily_df_components.apply(fn_z_score)
fitted_pca = KernelPCA().fit(df_z_components)
# Reconstructing the Dow Average with PCA
import numpy as np
kernel_pca = KernelPCA(n_components=5).fit(df_z_components)
pca_5 = kernel_pca.transform(-daily_df_components)
weights = fn_weighted_average(kernel_pca.lambdas_)
reconstructed_values = np.dot(pca_5, weights)
# Combine PCA and Index to compare
df_combined = djia_2020_weird.copy()
df_combined['pca_5'] = reconstructed_values
But it returns:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-100-2808dc14f789> in <module>()
9 # Combine PCA and Index to compare
10 df_combined = djia_2020_weird.copy()
---> 11 df_combined['pca_5'] = reconstructed_values
12 df_combined = df_combined.apply(fn_z_score)
13 df_combined.plot(figsize=(12,8));
3 frames
/usr/local/lib/python3.6/dist-packages/pandas/core/internals/construction.py in sanitize_index(data, index)
746 if len(data) != len(index):
747 raise ValueError(
--> 748 "Length of values "
749 f"({len(data)}) "
750 "does not match length of index "
ValueError: Length of values (361) does not match length of index (14)
Indeed, reconstructed_values is 361 long and df_combined is 14 values long...
Here is this last dataframe:
DJI
date
2021-01-21 NaN
2021-01-22 311.37
2021-01-23 310.03
2021-01-24 310.03
2021-01-25 310.03
2021-01-26 309.01
2021-01-27 309.49
2021-01-28 302.17
2021-01-29 305.25
2021-01-30 299.20
2021-01-31 299.20
2021-02-01 299.20
2021-02-02 302.13
2021-02-03 307.86
Maybe the reason is that the notebook author was able to get the data for the whole year he was interested in, whereas when I run the code it seems that I only get two months?
Ahoy there, I'm the author of the notebook. It seems Quandl no longer provides historical prices of DJIA after the time of writing, and copyright wasn't granted to redistribute the data. For research, you may consider other free stock tickers to proxy DJIA.
The example usages have been updated in the repo to demonstrate KernelPCA, as explained here.
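As for the ValueError itself: reconstructed_values has one row per date of daily_df_components (361), while the DJIA proxy frame only has 14 rows. A hedged sketch of one way to align the two, assuming both frames are indexed by date and actually share some dates:
import pandas as pd

# Wrap the PCA reconstruction in a Series indexed by the component dates,
# then keep only the dates present in both frames before assigning.
pca_series = pd.Series(reconstructed_values, index=daily_df_components.index, name='pca_5')
common_dates = df_combined.index.intersection(pca_series.index)
df_aligned = df_combined.loc[common_dates].copy()
df_aligned['pca_5'] = pca_series.loc[common_dates]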

KeyError: "None of [Index(['23/01/2020' ......,\n dtype='object', length=9050)] are in the [columns]"

I am learning pandas and matplotlib on my own, using a public dataset available via this API link.
I'm using colab and below are my codes:
import datetime
import io
import json
import pandas as pd
import requests
import matplotlib.pyplot as plt
confirm_resp = requests.get('https://api.data.gov.hk/v2/filterq=%7B%22resource%22%3A%22http%3A%2F%2Fwww.chp.gov.hk%2Ffiles%2Fmisc%2Fenhanced_sur_covid_19_eng.csv%22%2C%22section%22%3A1%2C%22format%22%3A%22json%22%7D').content
confirm_df = pd.read_json(io.StringIO(confirm_resp.decode('utf-8')))
confirm_df.columns = confirm_df.columns.str.replace(" ", "_")
pd.to_datetime(confirm_df['Report_date'])
confirm_df.columns = ['Case_no', 'Report_date', 'Onset_date', 'Gender', 'Age',
'Name_of_hospital_admitted', 'Status', 'Resident', 'Case_classification', 'Confirmed_probable']
confirm_df = confirm_df.drop('Name_of_hospital_admitted', axis = 1)
confirm_df.head()
and this is what the dataframe looks like:
Case_no  Report_date  Onset_date  Gender  Age  Status      Resident         Case_classification  Confirmed_probable
1        23/01/2020   21/01/2020  M       39   Discharged  Non-HK resident  Imported case        Confirmed
2        23/01/2020   18/01/2020  M       56   Discharged  HK resident      Imported case        Confirmed
3        24/01/2020   20/01/2020  F       62   Discharged  Non-HK resident  Imported case        Confirmed
4        24/01/2020   23/01/2020  F       62   Discharged  Non-HK resident  Imported case        Confirmed
5        24/01/2020   23/01/2020  M       63   Discharged  Non-HK resident  Imported case        Confirmed
When I try to make a simple plot with the below code:
x = confirm_df['Report_date']
y = confirm_df['Case_classification']
confirm_df.plot(x, y)
It gives me the below error:
KeyError Traceback (most recent call last)
<ipython-input-17-e4139a9b5ef1> in <module>()
4 y = confirm_df['Case_classification']
5
----> 6 confirm_df.plot(x, y)
3 frames
/usr/local/lib/python3.6/dist-packages/pandas/plotting/_core.py in __call__(self, *args, **kwargs)
912 if is_integer(x) and not data.columns.holds_integer():
913 x = data_cols[x]
--> 914 elif not isinstance(data[x], ABCSeries):
915 raise ValueError("x must be a label or position")
916 data = data.set_index(x)
/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py in __getitem__(self, key)
2910 if is_iterator(key):
2911 key = list(key)
-> 2912 indexer = self.loc._get_listlike_indexer(key, axis=1, raise_missing=True)[1]
2913
2914 # take() does not accept boolean indexers
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in _get_listlike_indexer(self, key, axis, raise_missing)
1252 keyarr, indexer, new_indexer = ax._reindex_non_unique(keyarr)
1253
-> 1254 self._validate_read_indexer(keyarr, indexer, axis, raise_missing=raise_missing)
1255 return keyarr, indexer
1256
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing)
1296 if missing == len(indexer):
1297 axis_name = self.obj._get_axis_name(axis)
-> 1298 raise KeyError(f"None of [{key}] are in the [{axis_name}]")
1299
1300 # We (temporarily) allow for some missing keys with .loc, except in
KeyError: "None of [Index(['23/01/2020', '23/01/2020', '24/01/2020', '24/01/2020', '24/01/2020',\n '26/01/2020', '26/01/2020', '26/01/2020', '29/01/2020', '29/01/2020',\n ...\n '05/01/2021', '05/01/2021', '05/01/2021', '05/01/2021', '05/01/2021',\n '05/01/2021', '05/01/2021', '05/01/2021', '05/01/2021', '05/01/2021'],\n dtype='object', length=9050)] are in the [columns]"
I have tried to make the plot both with and without converting Report_date to a datetime object, and I tried substituting the x value with every column in the dataframe, but all attempts give me the same error.
I'd appreciate it if anyone can help me understand how to handle these issues, here and going forward. I've spent hours trying to resolve this but cannot find the answer.
I did not encounter this issue before when I downloaded some notebooks and datasets from Kaggle to follow along.
Thank you and happy new year.
First, you need to assign the converted date back to the column:
confirm_df['Report_date'] = pd.to_datetime(confirm_df['Report_date'])
Second, when the plot method is called on a dataframe object, you need to provide only the column names as arguments (1).
confirm_df.plot(x='Report_date', y='Case_classification')
But the above code still throws an error, because 'Case_classification' is not numeric data.
You are trying to plot datetime vs. categorical data, so a normal plot won't work, but something like this could work (2):
# I used only first 15 examples here, full dataset is kinda messy
confirm_df.iloc[:15, :].groupby(['Report_date', 'Case_classification']).size().unstack().plot.bar()
(1)pandas.DataFrame.plot
(2)How to plot categorical variable against a date column in Python
Several problems. First, the links were incorrect; I have edited them (probably just a copy/paste error). Second, you have to assign the converted datetime series back to the dataframe; use print(confirm_df.dtypes) to see the difference. Then, the dataset is not ordered by date, but matplotlib expects an ordered x-axis. Well, actually, the problem was that the parser misinterpreted the datetime objects, so I have added dayfirst=True to ensure that the dates are read correctly. Finally, what do you want to plot here? Just the cases by date? The number of cases per group by date? Your original code implies just the former, but that is not really informative, is it?
import io
import pandas as pd
import requests
import matplotlib.pyplot as plt
print("starting download")
confirm_resp = requests.get('https://api.data.gov.hk/v2/filter?q=%7B%22resource%22%3A%22http%3A%2F%2Fwww.chp.gov.hk%2Ffiles%2Fmisc%2Fenhanced_sur_covid_19_eng.csv%22%2C%22section%22%3A1%2C%22format%22%3A%22json%22%7D').content
print("finished download")
confirm_df = pd.read_json(io.StringIO(confirm_resp.decode('utf-8')))
confirm_df.columns = confirm_df.columns.str.replace(" ", "_")
confirm_df['Report_date'] = pd.to_datetime(confirm_df['Report_date'], dayfirst=True)
confirm_df.columns = ['Case_no', 'Report_date', 'Onset_date', 'Gender', 'Age',
'Name_of_hospital_admitted', 'Status', 'Resident', 'Case_classification', 'Confirmed_probable']
confirm_df = confirm_df.drop('Name_of_hospital_admitted', axis = 1)
print(confirm_df.dtypes)
fig, ax = plt.subplots(figsize=(20, 5))
ax.plot(confirm_df['Report_date'], confirm_df['Case_classification'])
plt.tight_layout()
plt.show()
Sample output: a line plot of Case_classification against Report_date (image omitted).
Some grouping and data aggregation might be more informative, but you have to decide what you want to display first before writing the code.
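For example, one possible aggregation (just a sketch, building on the code above) is the number of cases per report date, split by classification:
# Count cases per report date and classification, then plot the counts over time.
counts = (confirm_df
          .groupby(['Report_date', 'Case_classification'])
          .size()
          .unstack(fill_value=0))
counts.plot(figsize=(20, 5))
plt.tight_layout()
plt.show()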

Problem with negative numbers in sklearn.feature_selection.SelectKBest feature scoring module

I was trying automatic feature engineering and selection, and for that I used the Boston house price dataset available in sklearn.
from sklearn.datasets import load_boston
import pandas as pd
data = load_boston()
x = data.data
y= data.target
y = pd.DataFrame(y)
Then I implemented the feature transformation library on the dataset.
import autofeat as af
clf = af.AutoFeatRegressor()
df = clf.fit_transform(x,y)
df = pd.DataFrame(df)
After this, I implemented another function to find the score of each feature in relation to the label.
from sklearn.feature_selection import SelectKBest, chi2
X_new = SelectKBest(chi2, k=20)
X_new_done = X_new.fit_transform(df,y)
dfscores = pd.DataFrame(X_new.scores_)
dfcolumns = pd.DataFrame(X_new_done.columns)
featureScores = pd.concat([dfcolumns,dfscores],axis=1)
featureScores.columns = ['Specs','Score']
print(featureScores.nlargest(10,'Score'))
This gave the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-16-b0fa1556bdef> in <module>()
1 from sklearn.feature_selection import SelectKBest, chi2
2 X_new = SelectKBest(chi2, k=20)
----> 3 X_new_done = X_new.fit_transform(df,y)
4 dfscores = pd.DataFrame(X_new.scores_)
5 dfcolumns = pd.DataFrame(X_new_done.columns)
ValueError: Input X must be non-negative.
I had a few negative numbers in my dataset. So how can I overcome this problem?
Note: df has no transformations of y; it only contains transformations of x.
You have a feature with all negative values:
df['exp(x005)*log(x000)']
returns
0 -3630.638503
1 -2212.931477
2 -4751.790753
3 -3754.508972
4 -3395.387438
...
501 -2022.382877
502 -1407.856591
503 -2998.638158
504 -1973.273347
505 -1267.482741
Name: exp(x005)*log(x000), Length: 506, dtype: float64
Quoting another answer (https://stackoverflow.com/a/46608239/5025009):
The error message Input X must be non-negative says it all: Pearson's chi square test (goodness of fit) does not apply to negative values. It's logical because the chi square test assumes frequencies distribution and a frequency can't be a negative number. Consequently, sklearn.feature_selection.chi2 asserts the input is non-negative.
In many cases, it may be quite safe to simply shift each feature to make it all positive, or even normalize to [0, 1] interval as suggested by EdChum.
If data transformation is for some reason not possible (e.g. a negative value is an important factor), you should pick another statistic to score your features:
sklearn.feature_selection.f_regression computes ANOVA f-value
sklearn.feature_selection.mutual_info_classif computes the mutual information
Since the whole point of this procedure is to prepare the features for another method, it's not a big deal which one you pick; the end result is usually the same or very close.
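A hedged sketch of the f_regression option applied to the code from the question (it accepts negative features and suits a continuous target like the Boston prices):
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_regression

selector = SelectKBest(f_regression, k=20).fit(df, y.values.ravel())
featureScores = pd.DataFrame({'Specs': df.columns, 'Score': selector.scores_})
print(featureScores.nlargest(10, 'Score'))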

For loop in DataFrame

I have multiple files with a lot of data and 19 columns. I am trying to use multiple for-loops to set array elements equal to the first column, second column, etc. of the files.
import numpy as np
import glob
import pandas as pd
#
lat=np.zeros(90)
long=np.zeros(180)
indat=np.zeros(19)
#
file_in = glob.glob('filenames*.dat')
for a in range(140):
    for i in range(90):
        for j in range(180):
            df = pd.DataFrame()
            for f in file_in:
                cols = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]  # there are nineteen columns
                indat = df.append(pd.read_csv(f, delimiter='\\s+', header=None, usecols=cols, skiprows=4), ignore_index=True)
            lat[i] = indat[0]  # error here
            long[j] = indat[1]
            # updates some code here
            if i >= 70:
                dens[a, j, i-70] = indat[2]
It gave me this error:
ValueError: setting an array element with a sequence.
Updates:
indat has 19 columns; there are many files, but the format is the same in all of them.
Sample indat
#columns
#0 1 2 3 ..... 19
-90 0 2e-12 #just some number
-90 2 3e-12 #just some number
-90 4 4e-12 #just some number
...
-90 360 1e-12 #just some number
-88 0 1e-11 #just some number
-88 2 2e-11 #just some number
-88 4 3e-11 #just some number
...
-88 360 4e-11 #just some number
...
90 0 2.5e-12 #just some number
90 2 3.5e-11 #just some number
90 4 4.5e-12 #just some number
...
90 360 1.5e-12 #just some number
EDIT: I cleaned the code up based on everyone's suggestions:
import numpy as np
import glob
import pandas as pd
file_in = glob.glob('filenames*.dat')
df = pd.DataFrame()
for f in file_in:
    cols = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]
    indat = pd.read_csv(f, delimiter='\\s+', header=None, usecols=cols, skiprows=4)
    for a in range(140):
        for i in range(90):
            for j in range(180):
                lat[i] = indat[0]  # error here
                long[j] = indat[1]
                if i >= 70:
                    dens[a, j, i-70] = indat[2]
You tried to assign a column (a pandas Series), indat[0], to a single element of a NumPy vector, lat[i].
Also, what is the point of indat = np.zeros(19) when you override it with a dataframe later?
What is the content of indat[0]?
This line of code
indat = df.append(pd.read_csv(f, delimiter='\\s+', header=None, usecols=cols, skiprows=4), ignore_index=True)
is basically the same as
indat = pd.read_csv(f, delimiter='\\s+', header=None, usecols=cols, skiprows=4)
because df never changes, i.e. it is always an empty dataframe.
Since the content of indat is unknown, it's difficult to fix your code.
If you just want to make it run without an error, I suggest writing
lat[i] = indat[0].values[0]   # take the first value of the column
long[j] = indat[1].values[0]  # take the first value of the column
It would be good to take a tutorial on NumPy and pandas, since this can be very confusing without some basic understanding.
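If the files really follow the layout in the sample above (latitude in column 0, longitude in column 1, a value in column 2), a pivot may avoid the nested loops entirely; this is only a sketch based on that assumption:
# Build one file's (lat x long) grid without explicit loops.
grid = indat.pivot(index=0, columns=1, values=2)  # rows: latitude, columns: longitude
lat = grid.index.values
long_vals = grid.columns.values
dens_2d = grid.values  # 2D array of the values from column 2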

getting indices in numpy

Can someone find out what is wrong with the code below?
import numpy as np
data = np.recfromcsv("data.txt", delimiter=" ", names=['name', 'types', 'value'])
indices = np.where((data.name == 'david') * data.types.startswith('height'))
mean_value = np.mean(data.value[indices])
I want to calculate the mean of weight and height for david and mark as follows:
david>> mean(weight_2005 and weight_2012), mean (height_2005 and height_2012)
mark>> mean(weight_2005 and weight_2012), mean (height_2005 and height_2012)
From the text (data.txt) file:
david weight_2005 50
david weight_2012 60
david height_2005 150
david height_2012 160
mark weight_2005 90
mark weight_2012 85
mark height_2005 160
mark height_2012 170
I am using python 3.2 and numpy 1.8
The above code produces the following type error:
TypeError: startswith first arg must be bytes or a tuple of bytes, not numpy.str_
With Python 3.2 and numpy 1.7, this line works:
indices = np.where((data.name == b'david') * data.types.startswith(b'height'))
data displays as:
rec.array([(b'david', b'weight_2005', 50),...],
dtype=[('name', 'S5'), ('types', 'S11'), ('value', '<i4')])
type(data.name[0]) is <class 'bytes'>.
b'height' works in Python2.7 as well.
Another option is to convert all the data to unicode (Python 3 strings):
dtype = [('name', 'U5'), ('types', 'U11'), ('value', '<i4')]
dataU = data.astype(dtype=dtype)
indices = np.where((dataU.name == 'david') * dataU.types.startswith('height'))
or
data = np.recfromtxt('data.txt', delimiter=" ",
                     names=['name', 'types', 'value'], dtype=dtype)
It looks like recfromcsv does not take a dtype, but recfromtxt does.
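With the unicode version, the per-person means asked for in the question could then be computed along these lines (a sketch, assuming the data.txt layout shown above):
# For each person and measurement prefix, average the matching values.
for person in ('david', 'mark'):
    for measure in ('weight', 'height'):
        mask = (dataU.name == person) & dataU.types.startswith(measure)
        print(person, measure, dataU.value[mask].mean())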
