How to encode multiple categorical columns for test data efficiently? - python-3.x

I have multiple categorical columns (nearly 50). I am using a custom-made frequency encoding on the training data and saving the result as a nested dictionary. For the test data I use the map function to encode, and unseen labels are replaced with 0. But I need a more efficient way.
I have already tried the pandas replace method, but it does not care about unseen labels and leaves them as they are. Furthermore, I am very concerned about time: I want, say, 80 columns and 1 row to be encoded within 60 ms. I just need the most efficient way to do it. I have taken my example from here.
import pandas
from sklearn import preprocessing
df = pandas.DataFrame({'pets': ['cat', 'dog', 'cat', 'monkey', 'dog', 'meo'],
                       'owner': ['Champ', 'Ron', 'Brick', 'Champ', 'Veronica', 'Ron'],
                       'location': ['San_Diego', 'New_York', 'New_York', 'San_Diego', 'San_Diego',
                                    'New_York']})
My dict looks something like this:
enc = {'pets': {'cat': 0, 'dog': 1, 'monkey': 2},
       'owner': {'Brick': 0, 'Champ': 1, 'Ron': 2, 'Veronica': 3},
       'location': {'New_York': 0, 'San_Diego': 1}}
for col in enc:
    if col in input_df.columns:
        input_df[col] = input_df[col].map(enc[col]).fillna(0)
Furthermore, I want multiple columns to be encoded at once; I don't want a loop over every column. I guess we can't do that with map. Hence replace would be a good choice, but as said, it doesn't care about unseen labels.
EDIT:
This is the code I am using for now. Please note there is only 1 row in the test data frame (I am not sure whether I should handle it as a numpy array to reduce time). I need to get the time under 60 ms; I only have a dictionary for the mapping (I can't use one-hot because of the use case). Current time = 331.74 ms. Any idea how to do this more efficiently? Not sure that multiprocessing will work. Furthermore, with the replace method I ran into several issues:
1. It does not handle unseen labels and leaves them as they are (an issue for strings).
2. It has problems with overlapping keys and values.
from string import ascii_lowercase
import itertools
import pandas as pd
import numpy as np
import time

def iter_all_strings():
    for size in itertools.count(1):
        for s in itertools.product(ascii_lowercase, repeat=size):
            yield "".join(s)

l = []
for s in iter_all_strings():
    l.append(s)
    if s == 'gr':
        break

columns = l
df = pd.DataFrame(columns=columns)
for col in df.columns:
    df[col] = np.random.randint(1, 4000, 3000)

transform_dict = {}
for col in df.columns:
    cats = pd.Categorical(df[col]).categories
    d = {}
    for i, cat in enumerate(cats):
        d[cat] = i
    transform_dict[col] = d

print(f"The length of the dictionary is {len(transform_dict)}")

# Creating another test data frame
df2 = pd.DataFrame(columns=columns)
for col in df2.columns:
    df2[col] = np.random.randint(1, 4000, 1)

print(f"The shape of the 2nd data frame is {df2.shape}")

t1 = time.time()
for col in df2.columns:
    df2[col] = df2[col].map(transform_dict[col]).fillna(0)
print(f"Time taken is {time.time() - t1}")
# print(df)

Firstly, when you want to encode categorical variables that are not ordinal (meaning there is no inherent ordering between the values of the variable/column, e.g. cat, dog), you must use one-hot encoding.
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({'pets': ['cat', 'dog', 'cat', 'monkey', 'dog', 'meo'],
                   'owner': ['Champ', 'Ron', 'Brick', 'Champ', 'Veronica', 'Ron'],
                   'location': ['San_Diego', 'New_York', 'New_York', 'San_Diego', 'San_Diego',
                                'New_York']})

enc = [['cat', 'dog', 'monkey'],
       ['Brick', 'Champ', 'Ron', 'Veronica'],
       ['New_York', 'San_Diego']]

ohe = OneHotEncoder(categories=enc, handle_unknown='ignore', sparse=False)
Here, I have modified your enc in a way that can be fed into the OneHotEncoder.
Now comes the question of how we are going to handle unseen labels.
When you set handle_unknown='ignore', unseen values get zeros in all the dummy variables, which in a way helps the model understand that it is an unknown value.
transformed = ohe.fit_transform(df)
colnames = ['{}_{}'.format(col, val) for col, unique_values in zip(df.columns, ohe.categories_)
            for val in unique_values]
pd.DataFrame(transformed, columns=colnames)
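As a quick check of the unseen-label behaviour, here is a minimal sketch (reusing df, ohe and colnames from above; the 'hamster' value is made up) showing that an unknown category encodes to all zeros for its column group:
unseen = pd.DataFrame({'pets': ['hamster'], 'owner': ['Ron'], 'location': ['New_York']})
# the three pets_* columns come out as 0.0 because 'hamster' was never seen during fit
print(pd.DataFrame(ohe.transform(unseen), columns=colnames))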
Update:
If you are fine with ordinal encoding, the following change could help.
df2.apply(lambda row: [transform_dict[col].get(val, 0)
                       for col, val in row.items()],
          axis=1,
          result_type='expand')
#1000 loops, best of 3: 1.17 ms per loop
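If the test frame really is a single row, it may also be worth benchmarking a plain dict comprehension that bypasses pandas overhead entirely (a rough sketch, reusing transform_dict and df2 from the question; timings will vary by machine):
row = df2.iloc[0]
# look up each value in its column's mapping, defaulting to 0 for unseen labels
encoded = {col: transform_dict[col].get(val, 0) for col, val in row.items()}
encoded_df = pd.DataFrame([encoded])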

Related

how to use ColumnTransformer() to return a dataframe?

I have a dataframe like this:
department review projects salary satisfaction bonus avg_hrs_month left
0 operations 0.577569 3 low 0.626759 0 180.866070 0
1 operations 0.751900 3 medium 0.443679 0 182.708149 0
2 support 0.722548 3 medium 0.446823 0 184.416084 0
3 logistics 0.675158 4 high 0.440139 0 188.707545 0
4 sales 0.676203 3 high 0.577607 1 179.821083 0
I want to try ColumnTransformer() and return a transformed dataframe.
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder
from sklearn.compose import ColumnTransformer

ord_features = ["salary"]
ordinal_transformer = OrdinalEncoder()
cat_features = ["department"]
categorical_transformer = OneHotEncoder(handle_unknown="ignore")
ct = ColumnTransformer(
    transformers=[
        ("ord", ordinal_transformer, ord_features),
        ("cat", categorical_transformer, cat_features),
    ]
)
df_new = ct.fit_transform(df)
df_new
which gives me a sparse matrix of type <class 'numpy.float64'>.
if I use pd.DataFrame(ct.fit_transform(df)) then I'm getting a single column:
0
0 (0, 0)\t1.0\n (0, 7)\t1.0
1 (0, 0)\t2.0\n (0, 7)\t1.0
2 (0, 0)\t2.0\n (0, 10)\t1.0
3 (0, 5)\t1.0
4 (0, 9)\t1.0
However, I was expecting the transformed dataframe to look like this:
review projects salary satisfaction bonus avg_hrs_month operations support ...
0 0.577569 3 1 0.626759 0 180.866070 1 0
1 0.751900 3 2 0.443679 0 182.708149 1 0
2 0.722548 3 2 0.446823 0 184.416084 0 1
3 0.675158 4 3 0.440139 0 188.707545 0 0
4 0.676203 3 3 0.577607 1 179.821083 0 0
Is it possible with ColumnTransformer()?
As quickly sketched in the comments, there are a couple of considerations to be made about your example:
The .fit_transform() method generally returns either a sparse matrix or a numpy array. Returning a sparse matrix serves the purpose of saving memory; think of the example where you one-hot-encode a categorical attribute with many categories: you end up with a matrix with many columns and a single non-zero entry per row, and with a sparse matrix you can store only the locations of the non-zero elements. In these situations you can call .toarray() on the output of .fit_transform() to get a numpy array back, which can then be passed to the pd.DataFrame constructor.
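For illustration, a tiny stand-alone sketch of that dense conversion (the small csr_matrix here is just a stand-in for what ct.fit_transform(df) might return):
import pandas as pd
from scipy.sparse import csr_matrix

sparse_result = csr_matrix([[1.0, 0.0, 1.0], [0.0, 2.0, 0.0]])  # stand-in for a sparse fit_transform output
dense_df = pd.DataFrame(sparse_result.toarray())                # .toarray() gives back a dense numpy array
print(dense_df)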
Actually, on a five-row dataset similar to the one you provided,
df = pd.DataFrame({
    'department': ['operations', 'operations', 'support', 'logistics', 'sales'],
    'review': [0.577569, 0.751900, 0.722548, 0.675158, 0.676203],
    'projects': [3, 3, 3, 4, 3],
    'salary': ['low', 'medium', 'medium', 'low', 'high'],
    'satisfaction': [0.626759, 0.751900, 0.722548, 0.675158, 0.676203],
    'bonus': [0, 0, 0, 0, 1],
    'avg_hrs_month': [180.866070, 182.708149, 184.416084, 188.707545, 179.821083],
    'left': [0, 0, 1, 0, 0]
})

ord_features = ["salary"]
ordinal_transformer = OrdinalEncoder()
cat_features = ["department"]
categorical_transformer = OneHotEncoder(handle_unknown="ignore")
ct = ColumnTransformer(transformers=[
    ("ord", ordinal_transformer, ord_features),
    ("cat", categorical_transformer, cat_features),
])
I can't reproduce your issue (namely, I directly obtain a numpy array), but basically pd.DataFrame(ct.fit_transform(df).toarray()) should work for your case. This is the output you would get:
As you can see, with respect to your expected output, this only contains the transformed (ordinally encoded) salary column as the first column and the transformed (one-hot-encoded) department columns from the second to the last column. That's because, as described in the docs, the parameter remainder is set to 'drop' by default, which implies that all columns not subject to transformation are dropped. To avoid this, you should set it to 'passthrough'; this lets you transform the columns you need and keep the others untouched.
ct = ColumnTransformer(transformers=[
    ("ord", ordinal_transformer, ord_features),
    ("cat", categorical_transformer, cat_features)],
    remainder='passthrough'
)
This would be the output of your pd.DataFrame(ct.fit_transform(df).toarray()) in such a case:
Again, as you can see, the column order is also not the one you would expect after the transformation. Long story short, that's because in a ColumnTransformer:
The order of the columns in the transformed feature matrix follows the order of how the columns are specified in the transformers list. Columns of the original feature matrix that are not specified are dropped from the resulting transformed feature matrix, unless specified in the passthrough keyword. Those columns specified with passthrough are added at the right to the output of the transformers.
I would suggest reading Preserve column order after applying sklearn.compose.ColumnTransformer on this topic.
Finally, as far as column names are concerned, you should probably apply a custom solution, passing what you want directly to the columns parameter of the pd.DataFrame constructor. Indeed, OrdinalEncoder (differently from OneHotEncoder) does not provide a .get_feature_names_out() method, which would otherwise make it easy to pass columns=ct.get_feature_names_out() to the pd.DataFrame constructor. See ColumnTransformer & Pipeline with OHE - Is the OHE encoded field retained or removed after ct is performed? for an example of its usage.
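As a rough sketch of such a custom solution (assuming the ct with remainder='passthrough' defined above, and a scikit-learn version where OneHotEncoder provides get_feature_names_out; column order follows the ColumnTransformer rule quoted earlier):
arr = ct.fit_transform(df)
arr = arr.toarray() if hasattr(arr, "toarray") else arr  # densify only if a sparse matrix came back
ohe_cols = list(ct.named_transformers_["cat"].get_feature_names_out(cat_features))
passthrough_cols = [c for c in df.columns if c not in ord_features + cat_features]
df_new = pd.DataFrame(arr, columns=ord_features + ohe_cols + passthrough_cols)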
Update 10/2022 - sklearn version 1.2.dev0
With sklearn version 1.2.0 it will be possible to solve the problem of returning a DataFrame when transforming a ColumnTransformer instance much more easily. Such version has not been released yet, but you can test the following in dev (version 1.2.dev0), by installing the nightly builds as such:
pip install --pre --extra-index https://pypi.anaconda.org/scipy-wheels-nightly/simple scikit-learn -U
The ColumnTransformer (and other transformers as well) now exposes a .set_output() method, which makes it possible to configure a transformer to output pandas DataFrames by passing transform='pandas' to it.
Therefore, the example becomes:
import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, OrdinalEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    'department': ['operations', 'operations', 'support', 'logistics', 'sales'],
    'review': [0.577569, 0.751900, 0.722548, 0.675158, 0.676203],
    'projects': [3, 3, 3, 4, 3],
    'salary': ['low', 'medium', 'medium', 'low', 'high'],
    'satisfaction': [0.626759, 0.751900, 0.722548, 0.675158, 0.676203],
    'bonus': [0, 0, 0, 0, 1],
    'avg_hrs_month': [180.866070, 182.708149, 184.416084, 188.707545, 179.821083],
    'left': [0, 0, 1, 0, 0]
})

ord_features = ["salary"]
ordinal_transformer = OrdinalEncoder()
cat_features = ["department"]
categorical_transformer = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
ct = ColumnTransformer(transformers=[
    ("ord", ordinal_transformer, ord_features),
    ("cat", categorical_transformer, cat_features)],
    remainder='passthrough'
)

ct.set_output(transform='pandas')
df_pandas = ct.fit_transform(df)
df_pandas
The output also becomes much easier to read, as it has proper column names (indeed, at each step the transformers that make up the ColumnTransformer have the attribute feature_names_in_, so you no longer lose column names while transforming the input).
One last note: observe that the example now requires the parameter sparse_output=False to be passed to the OneHotEncoder instance in order to work.
This answer skips the workaround and directly provides a solution for scikit-learn version 1.2+
From sklearn version 1.2 on, transformers can return a pandas DataFrame directly without further handling. It is done with set_output, which can be configured per estimator by calling the set_output method or globally by setting set_config(transform_output="pandas"). See Release Highlights for scikit-learn 1.2 - Pandas output with set_output API
In your case the solution would be:
ord_features = ["salary"]
ordinal_transformer = OrdinalEncoder()
cat_features = ["department"]
categorical_transformer = OneHotEncoder(handle_unknown="ignore")
ct = ColumnTransformer(
transformers=[
("ord", ordinal_transformer, ord_features),
("cat", categorical_transformer, cat_features ),
]
)
# Add the following line to your code
ct.set_output(transform="pandas")
df_new = ct.fit_transform(df)
df_new
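Alternatively, as mentioned above, the same behaviour can be switched on globally instead of per estimator (a small sketch, assuming scikit-learn >= 1.2 and the ct defined above):
from sklearn import set_config

set_config(transform_output="pandas")  # every transformer now returns DataFrames
df_new = ct.fit_transform(df)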

How to find line-sequence from pandas.dataframe, python3

I have several lines defined in a dataframe.
import pandas as pd
df = pd.DataFrame(
    {
        'from': ['p2', 'p3', 'p1'],
        'to': ['p3', 'p4', 'p2'],
    },
    index=['line_b', 'line_c', 'line_a'],
)
# How to get line_sequence as ['line_a', 'line_b', 'line_c']?
Each line has a from point and a to point. These lines are connected in a certain sequence. In this example, the sequence is line_a --> line_b --> line_c.
Could you please show me how to quickly find the connection sequence based on the from and to columns? In the example above there are numbers in the point names, like 'p1' and 'p2', but that is just an example; in my real application the names could be any string.
The expected outcome should be in the format of List[str].
Thanks.
Using networkx:
import pandas as pd
import networkx as nx
df = pd.DataFrame({'from': ['p2', 'p3', 'p1'], 'to': ['p3', 'p4', 'p2'], }, index=['line_b', 'line_c', 'line_a'])
# create graph
df = df.reset_index()
G = nx.from_pandas_edgelist(df, "from", "to", edge_attr="index", create_using=nx.DiGraph)
# store edge attributes
index = nx.get_edge_attributes(G, "index")
# find topological sort
order = nx.topological_sort(nx.line_graph(G))
# map to index
res = [index[edge] for edge in order]
print(res)
Output
['line_a', 'line_b', 'line_c']
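If you would rather not pull in networkx, a plain dictionary walk gives the same result (a sketch, assuming the original df from the question with the line names still in the index, and that the lines form one unbranched chain where each point appears at most once as a source):
succ = dict(zip(df['from'], df['to']))            # p2 -> p3, p3 -> p4, p1 -> p2
line_by_from = dict(zip(df['from'], df.index))    # p2 -> line_b, p3 -> line_c, p1 -> line_a
start = (set(df['from']) - set(df['to'])).pop()   # the point that is never a destination
order = []
point = start
while point in succ:
    order.append(line_by_from[point])
    point = succ[point]
print(order)  # ['line_a', 'line_b', 'line_c']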

How do I make my algo work with KNN text classification?

Trying to make my classification accept a text (string) and not just a number. Working with data carrying a load of pulled articles, I want the classification algorithm to show which ones to proceed with and which ones to drop. Applying a number, things work just fine, yet this is not very intuitive, although I know that the number represents a relationship to one of the two classes I am working with.
How do I change the logic in the algorithm to make it accept a text as search criterion and not just an anonymous number picked from the 'Unique_id' column? The columns are, btw, 'Title', 'Abstract', 'Relevant', 'Label', 'Unique_id'. The reason for concatenating dataframes at the end of the algorithm is that I want to compare results. Finally, it should be noted that the 'Label' column consists of a list of keywords, so basically I want the algorithm to read from that column.
I did try changing index_col='Unique_id' to index_col='Label' when reading the data sources, but that did not work out either.
An example of what I want:
print("\nPrint KNN1")
print(get_closest_neighs1('search word'), "\n")
print("\nPrint KNN2")
print(get_closest_neighs2('search word'), "\n")
print("\nPrint KNN3")
print(get_closest_neighs3('search word'), "\n")
This is the full code (view end of algo to see above example as it runs today, using a number to identify nearest neighbor):
import pandas as pd
print("\nPerforming Analysis using Text Classification")
data = pd.read_csv('File_1_coltest_demo.csv', sep=';', encoding="ISO-8859-1").dropna()
data['Unique_id'] = data.groupby(['Title', 'Abstract', 'Relevant']).ngroup()
data.to_csv('File_2_coltest_demo_KNN.csv', sep=';', encoding="ISO-8859-1", index=False)
data1 = pd.read_csv('File_2_coltest_demo_KNN.csv', sep=';', encoding="ISO-8859-1", index_col='Unique_id')
data2 = pd.DataFrame(data1, columns=['Abstract', 'Relevant'])
data2.to_csv('File_3_coltest_demo_KNN_reduced.csv', sep=';', encoding="ISO-8859-1", index=False)
print("\nData top 25 items")
print(data2.head(25))
print("\nData info")
print(data2.info())
print("\nData columns")
print(data2.columns)
from sklearn.feature_extraction.text import CountVectorizer
from nltk.tokenize import RegexpTokenizer
token = RegexpTokenizer(r'[a-zA-Z0-9]+')
cv = CountVectorizer(lowercase=True, stop_words='english', ngram_range=(1, 1), tokenizer=token.tokenize)
text_counts = cv.fit_transform(data2['Abstract'])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    text_counts, data2['Abstract'], test_size=0.5, random_state=1)
print("\nTF IDF")
from sklearn.feature_extraction.text import TfidfVectorizer
tf = TfidfVectorizer()
text_tf = tf.fit_transform(data2['Abstract'])
print(text_tf)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    text_tf, data2['Abstract'], test_size=0.3, random_state=123)
from sklearn.neighbors import NearestNeighbors
import pandas as pd
nbrs = NearestNeighbors(n_neighbors=20, metric='euclidean').fit(text_tf)
def get_closest_neighs1(Abstract):
    row = data2.index.get_loc(Abstract)
    distances, indices = nbrs.kneighbors(text_tf.getrow(row))
    names_similar = pd.Series(indices.flatten()).map(data2.reset_index()['Abstract'])
    result = pd.DataFrame({'distance1': distances.flatten(), 'Abstract': names_similar})  # 'Unique_id': names_similar,
    return result

def get_closest_neighs2(Unique_id):
    row = data2.index.get_loc(Unique_id)
    distances, indices = nbrs.kneighbors(text_tf.getrow(row))
    names_similar = pd.Series(indices.flatten()).map(data2.reset_index()['Unique_id'])
    result1 = pd.DataFrame({'Distance': distances.flatten() / 10, 'Unique_id': names_similar})  # 'Unique_id': names_similar,
    return result1

def get_closest_neighs3(Relevant):
    row = data2.index.get_loc(Relevant)
    distances, indices = nbrs.kneighbors(text_tf.getrow(row))
    names_similar = pd.Series(indices.flatten()).map(data2.reset_index()['Relevant'])
    result2 = pd.DataFrame({'distance2': distances.flatten(), 'Relevant': names_similar})  # 'Unique_id': names_similar,
    return result2
print("\nPrint KNN1")
print(get_closest_neighs1(114), "\n")
print("\nPrint KNN2")
print(get_closest_neighs2(114), "\n")
print("\nPrint KNN3")
print(get_closest_neighs3(114), "\n")
data3 = pd.DataFrame(get_closest_neighs1(114))
data4 = pd.DataFrame(get_closest_neighs2(114))
data5 = pd.DataFrame(get_closest_neighs3(114))
del data5['distance2']
data6 = pd.concat([data3, data4, data5], axis=1).reindex(data3.index)
del data6['distance1']
data6.to_csv('File_4_coltest_demo_KNN_results.csv', sep=';', encoding="ISO-8859-1", index=False)
If I understand you right you are trying to do this:
You have vectorised all your documents by their "Abstract" field. Therefore documents with abstracts with similar word distributions should be nearby in TFIDF space.
You want to find the nearest neighbours to a document which has the search keyword.
Therefore you'd need to search the original corpus for the first (or all) documents which contain that keyword,
then find the index of that/those document(s), and then find their neighbours.
If there are multiple documents with that keyword, you would need to sort the index list and merge the overall results somehow, with some weightings.
If this is true, then the keyword search/lookup isn't really "inside" the model, it's just preselecting a document from the corpus. Once you have the document index, you can perform the KNN (repeatedly).
I'm not hugely familiar with Pandas, but I've done this kind of thing "manually" before e.g. by keeping the document titles in a separate array, with a map to the indexes.
I would imagine you would need to replace your data2.index.get_loc() calls with an iteration over the column values for "Label" and do a simple string search on each. Or does Pandas provide search functions within the corpus?
e.g. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html#pandas.DataFrame.query
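For example, a rough sketch of that preselection step with pandas (assuming data2 keeps a 'Label' column and its row order matches the rows used to build text_tf; find_rows_by_keyword is a made-up helper name):
def find_rows_by_keyword(frame, keyword, column="Label"):
    # boolean mask of rows whose Label contains the keyword (case-insensitive)
    mask = frame[column].astype(str).str.contains(keyword, case=False, na=False)
    return frame.index[mask].tolist()

matches = find_rows_by_keyword(data2, "search word")
if matches:
    # feed the first match (or loop over all matches) into the existing KNN helpers
    print(get_closest_neighs2(matches[0]))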

Vectorizing a for loop with a pandas dataframe

I am trying to do a project for my physics class where we are supposed to simulate motion of charged particles. We are supposed to randomly generate their positions and charges but we have to have positively charged particles in one region and negatively charged ones anywhere else. Right now, as a proof of concept, I am trying to do only 10 particles but the final project will have at least 1000.
My thought process is to create a dataframe with the first column containing the randomly generated charges and run a loop to see what value I get and place in the same dataframe as the next three columns their generated positions.
I have tried to do a simple for loop going over the rows and inputting the data as I go, but I run into an IndexingError: too many indexers. I also want this to run as efficiently as possible so that if I scale up the number of particles, it doesn't slow down too much.
I also want to vectorize the operations of calculating the motion of each particle since it is based on position of every other particle which, through normal loops would take a lot of computational time.
Any vectorization optimization or offloading to GPU would be very helpful, thanks.
# In[1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
# In[2]:
num_points=10
df_position = pd.DataFrame(pd,np.empty((num_points,4)),columns=['Charge','X','Y','Z'])
# In[3]:
charge = np.array([np.random.choice(2,num_points)])
df_position.iloc[:,0]=np.where(df_position["Charge"]==0,-1,1)
# In[4]:
def positive():
    return np.random.uniform(low=0, high=5)

def negative():
    return np.random.uniform(low=5, high=10)
# In[5]:
for row in df_position.itertuples(index=True, name='Charge'):
    if getattr(row, "Charge") == -1:
        df_position.iloc[row, 1] = positive()
        df_position.iloc[row, 2] = positive()
        df_position.iloc[row, 3] = positive()
    else:
        df_position.iloc[row, 1] = negative()
        # this is where I would get the IndexingError and would like to optimize this portion
        df_position.iloc[row, 2] = negative()
        df_position.iloc[row, 3] = negative()
df_position.iloc[:, 0] = np.where(df_position["Charge"] == 0, -1, 1)
# In[6]:
ax=plt.axes(projection='3d')
ax.set_xlim(0, 10); ax.set_ylim(0, 10); ax.set_zlim(0,10);
xdata=df_position.iloc[:,1]
ydata=df_position.iloc[:,2]
zdata=df_position.iloc[:,3]
chargedata=df_position.iloc[:11,0]
colors = np.where(df_position["Charge"]==1,'r','b')
ax.scatter3D(xdata,ydata,zdata,c=colors,alpha=1)
EDIT:
The dataframe that I want the results in would be something like this
Charge X Y Z
-1
1
-1
-1
1
With the initial coordinates of each charge listed after in their respective columns. It will be a 3D dataframe, as I will need to keep track of all their new positions after each time step so that I can do animations of the motion. Each layer will have exactly the same format.
Some code for creating your dataframe:
import numpy as np
import pandas as pd

num_points = 1_000
# uniform distribution of int, not sure it is the best one for your problem
# positive_point = np.random.randint(0, num_points)
positive_point = int(num_points / 100 * np.random.randn() + num_points / 2)
negative_point = num_points - positive_point

positive_df = pd.DataFrame(
    np.random.uniform(0.0, 5.0, size=[positive_point, 3]), index=[1] * positive_point, columns=['X', 'Y', 'Z']
)
negative_df = pd.DataFrame(
    np.random.uniform(5.0, 10.0, size=[negative_point, 3]), index=[-1] * negative_point, columns=['X', 'Y', 'Z']
)
df = pd.concat([positive_df, negative_df])
It is quite fast for 1,000 or 1,000,000.
Edit: with my first answer, I totally missed a big part of the question. This new one should fit better.
Second edit: I use a better distribution for the number of positive points than a uniform distribution of ints.
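As a side note, if you want the charge as an explicit column instead of the index, to match the layout sketched in the question, something like this should work (a small sketch on top of the df built above):
# move the +/-1 index into a regular 'Charge' column
df = df.rename_axis('Charge').reset_index()
print(df.head())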

Insert field into structured array at a specific column index

I'm currently using np.loadtxt to load some mixed data into a structured numpy array. I do some calculations on a few of the columns to output later. For compatibility reasons I need to maintain a specific output format so I'd like to insert those columns at specific points and use np.savetxt to export the array in one shot.
A simple setup:
import numpy as np
x = np.zeros((2,),dtype=('i4,f4,a10'))
x[:] = [(1,2.,'Hello'),(2,3.,'World')]
newcol = ['abc','def']
For this example I'd like to make newcol the 2nd column. I'm very new to Python (coming from MATLAB). From my searching all I've been able to find so far are ways to append newcol to the end of x to make it the last column, or x to newcol to make it the first column. I also turned up np.insert but it doesn't seem to work on a structured array because it's technically a 1D array (from my understanding).
What's the most efficient way to accomplish this?
EDIT1:
I investigated np.savetxt a little further and it seems like it can't be used with a structured array, so I'm assuming I would need to loop through and write each row with f.write. I could specify each column explicitly (by field name) with that approach and not have to worry about the order in my structured array, but that doesn't seem like a very generic solution.
For the above example my desired output would be:
1, abc, 2.0, Hello
2, def, 3.0, World
This is a way to add a field to the array, at the position you require:
from numpy import zeros, empty

def insert_dtype(x, position, new_dtype, new_column):
    if x.dtype.fields is None:
        raise ValueError("`x' must be a structured numpy array")
    new_desc = x.dtype.descr
    new_desc.insert(position, new_dtype)
    y = empty(x.shape, dtype=new_desc)
    for name in x.dtype.names:
        y[name] = x[name]
    y[new_dtype[0]] = new_column
    return y

x = zeros((2,), dtype='i4,f4,a10')
x[:] = [(1, 2., 'Hello'), (2, 3., 'World')]
new_dt = ('my_alphabet', '|S3')
new_col = ['abc', 'def']
x = insert_dtype(x, 1, new_dt, new_col)
Now x looks like
array([(1, 'abc', 2.0, 'Hello'), (2, 'def', 3.0, 'World')],
dtype=[('f0', '<i4'), ('my_alphabet', 'S3'), ('f1', '<f4'), ('f2', 'S10')])
The solution is adapted from here.
To print the recarray to file, you could use something like:
from matplotlib.mlab import rec2csv
rec2csv(x,'foo.txt')
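Note that rec2csv has since been removed from matplotlib, so on current versions a plain csv loop over the structured array is a safer bet (a sketch, reusing x from above; bytes fields are decoded for readability):
import csv

with open('foo.txt', 'w', newline='') as f:
    writer = csv.writer(f)
    for row in x:
        # row.tolist() turns a structured-array row into a plain tuple of Python values
        writer.writerow([v.decode() if isinstance(v, bytes) else v for v in row.tolist()])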
