reading data with varying length header - python-3.x

I want to read a file in Python which contains a varying-length header and then extract the variables that come after the header into a dataframe/series.
The data looks like:
....................................................................
Data coverage and measurement duty cycle:
When the instrument duty cycle is not in measure mode (i.e. in-flight
calibrations) the data is not given here (error flag = 2).
The measurements have been found to exhibit a strong sensitivity to cabin
pressure.
Consequently the instrument requires calibrated at each new cabin
pressure/altitude.
Data taken at cabin pressures for which no calibration was performed is
not given here (error flag = 2).
Measurement sensivity to large roll angles was also observed.
Data corresponding to roll angles greater than 10 degrees is not given
here (error flag = 2)
......................................................................
High Std: TBD ppb
Target Std: TBD ppb
Zero Std: 0 ppb
Mole fraction error flag description :
0 : Valid data
2 : Missing data
31636 0.69 0
31637 0.66 0
31638 0.62 0
31639 0.64 0
31640 0.71 0
.....
.....
So what I want is to extract the data as :
Time C2H6 Flag
0 31636 0.69 0 NaN
1 31637 0.66 0 NaN
2 31638 0.62 0 NaN
3 31639 0.64 0 NaN
4 31640 0.71 0 NaN
5 31641 0.79 0 NaN
6 31642 0.85 0 NaN
7 31643 0.81 0 NaN
8 31644 0.79 0 NaN
9 31645 0.85 0 NaN
I can do that with
infile="/nfs/potts.jasmin-north/scratch/earic/AEOG/data/mantildas_faam_20180911_r1_c118.na"
flightdata = pd.read_fwf(infile, skiprows=53, header=None, names=['Time', 'C2H6', 'Flag'],)
but I'm skipping approximately 53 rows because I counted how many I should skip. I have a bunch of these files, and some don't have exactly 53 rows in the header, so I was wondering what the best way to deal with this would be, and what criterion would let Python always read only the three columns of data once it finds them. For example, suppose I wanted Python to start reading the data from where it encounters
Mole fraction error flag description :
0 : Valid data
2 : Missing data
what should I do? Is there another criterion that would work better?

You can split on the header delimiter, like so:
import io
import pandas as pd

with open(filename, 'r') as f:
    myfile = f.read()

# keep only the text after the header delimiter
infile = myfile.split('Mole fraction error flag description :')[-1]
# skip lines with missing data
infile = infile.split('\n')
# likely a better indicator of a line with incorrect format, you know the data better
infile = '\n'.join([line for line in infile if ' : ' not in line])
# create dataframe (read_fwf needs a path or file-like object, hence StringIO)
flightdata = pd.read_fwf(io.StringIO(infile), header=None, names=['Time', 'C2H6', 'Flag'])
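Alternatively, you can keep pd.read_fwf pointed at the file path and compute skiprows by locating the delimiter line; a minimal sketch, assuming the two flag-description lines always sit immediately before the data:
import pandas as pd

with open(infile) as f:
    lines = f.readlines()
# index of the flag-description header line, then skip it plus the two flag lines
header_end = next(i for i, line in enumerate(lines)
                  if line.startswith('Mole fraction error flag description'))
flightdata = pd.read_fwf(infile, skiprows=header_end + 3, header=None,
                         names=['Time', 'C2H6', 'Flag'])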

Related

How to get the column name of a dataframe from values in a numpy array

I have a df with 15 columns:
df.columns:
0 class
1 name
2 location
3 income
4 edu_level
--
14 marital_status
After some transformations I got a numpy.ndarray with shape (15,3) named loads:
0.52 0.33 0.09
0.20 0.53 0.23
0.60 0.28 0.23
0.13 0.45 0.41
0.49 0.9
so on so on so on
So, 3 columns with 15 values.
What I need to do:
I want to get the df column names of the values from the first column of loads that are greater than 0.50.
For this example, the columns of df related to the first column of loads with values higher than 0.5 should return:
0 Class
2 Location
Same for the second column of loads, should return:
1 name
3 income
4 edu_level
and the same logic to the 3rd column of loads.
I managed to get the numpy array loads the way I need it, but I am having a bad time with this last part. I know I can simply pick the columns manually, but this will be a hard task when df has more than 15 features.
Can anyone help me, please?
Given your threshold you can create a boolean array in order to filter df.columns:
threshold = 0.5
for j in range(loads.shape[1]):
    # the boolean mask over the rows of loads selects the matching column names
    print(df.columns[loads[:, j] > threshold])
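As a self-contained illustration, here is a minimal sketch with a toy df and loads; the shapes and values are made up to mirror the question:
import numpy as np
import pandas as pd

# toy stand-ins for df and loads
df = pd.DataFrame(columns=['class', 'name', 'location', 'income', 'edu_level'])
loads = np.array([[0.52, 0.33, 0.09],
                  [0.20, 0.53, 0.23],
                  [0.60, 0.28, 0.23],
                  [0.13, 0.45, 0.41],
                  [0.49, 0.90, 0.10]])

threshold = 0.5
for j in range(loads.shape[1]):
    print(f"loads column {j}:", list(df.columns[loads[:, j] > threshold]))
# loads column 0: ['class', 'location']
# loads column 1: ['name', 'edu_level']
# loads column 2: []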

Pandas Dataframe: Dropping Selected rows with 0.0 float type values

I have a dataset that contains an amount column as a float type. Some of the rows contain values of 0.00, and because they skew the dataset, I need to drop them. I have temporarily set the "Amount" column as the index and sorted the values as well.
Afterwards, I attempted to drop the rows after subsetting with iloc but keep getting an error message of the form ValueError: Buffer has wrong number of dimensions (expected 1, got 3):
mortgage = mortgage.set_index('Gross Loan Amount').sort_values('Gross Loan Amount')
mortgage.drop([mortgage.loc[0.0]])
I also tried this:
mortgage.drop(mortgage.loc[0.0])
which flagged an error of the form KeyError: "[Column_names] not found in axis".
Please how else can I accomplish the task?
You could make a boolean frame and then use any:
df = df[~(df == 0).any(axis=1)]
In this code, all rows that have at least one zero in their data are removed.
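For example, on a small made-up frame the mask drops every row that contains a zero in any column:
import pandas as pd

df = pd.DataFrame({'Amount': [200.04, 0.00, 150.15, 69.98],
                   'Fee': [1.50, 2.00, 0.00, 0.75]})
df = df[~(df == 0).any(axis=1)]
print(df)
#    Amount   Fee
# 0  200.04  1.50
# 3   69.98  0.75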
Let me see if I get your problem. I created this sample dataset:
df = pd.DataFrame({'Values': [200.04,100.00,0.00,150.15,69.98,0.10,2.90,34.6,12.6,0.00,0.00]})
df
Values
0 200.04
1 100.00
2 0.00
3 150.15
4 69.98
5 0.10
6 2.90
7 34.60
8 12.60
9 0.00
10 0.00
Now, in order to get rid of the 0.00 values, you just have to do this:
df = df[df['Values'] != 0.00]
Output:
df
Values
0 200.04
1 100.00
3 150.15
4 69.98
5 0.10
6 2.90
7 34.60
8 12.60
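Since the question sets 'Gross Loan Amount' as the index, an equivalent filter on the index itself would be (a sketch based on that setup):
# keep only the rows whose index value (Gross Loan Amount) is not zero
mortgage = mortgage[mortgage.index != 0.0]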

Sorting data from a large text file and convert them into an array

I have a text file that contains some data.
#this is a sample file
# data can be used for practice
total number = 5
t=1
dx= 10 10
dy= 10 10
dz= 10 10
1 0.1 0.2 0.3
2 0.3 0.4 0.1
3 0.5 0.6 0.9
4 0.9 0.7 0.6
5 0.4 0.2 0.1
t=2
dx= 10 10
dy= 10 10
dz= 10 10
1 0.11 0.25 0.32
2 0.31 0.44 0.12
3 0.51 0.63 0.92
4 0.92 0.72 0.63
5 0.43 0.21 0.14
t=3
dx= 10 10
dy= 10 10
dz= 10 10
1 0.21 0.15 0.32
2 0.41 0.34 0.12
3 0.21 0.43 0.92
4 0.12 0.62 0.63
5 0.33 0.51 0.14
My aim is to read the file, find the rows where the first column value is 1 or 5, and store them as multidimensional arrays; for 1 it will be a1=[[0.1, 0.2, 0.3],[0.11, 0.25, 0.32],[0.21, 0.15, 0.32]] and for 5 it will be a5=[[0.4, 0.2, 0.1],[0.43, 0.21, 0.14],[0.33, 0.51, 0.14]].
Here is the code that I have written:
import numpy as np

with open("position.txt", "r") as data:
    lines = data.read().split(sep='\n')

a1 = []
a5 = []
for line in lines:
    if line.startswith('1'):
        a1.append(list(map(float, line.split()[1:])))
    elif line.startswith('5'):
        a5.append(list(map(float, line.split()[1:])))

a1 = np.array(a1)
a5 = np.array(a5)
My code works perfectly with the sample file I have uploaded, but in the real case my file is quite large (2 GB). Handling that with my code raises a memory error. How can I solve this issue? I have 96 GB of RAM in my workstation.
There are several things to improve:
Don't attempt to load the entire text file in memory (that will save 2 GB).
Use numpy arrays, not lists, for storing numerical data.
Use single-precision floats rather than double-precision.
So, you need to estimate how big your array will be. It looks like there may be 16 million records for 2 GB of input data. With 32-bit floats, you need 16e6*2*4=128 MB of memory. For a 500 GB input, it will fit in 33 GB memory (assuming you have the same 120-byte record size).
import numpy as np

nmax = int(20e+6)  # take a bit of safety margin
a1 = np.zeros((nmax, 3), dtype=np.float32)
a5 = np.zeros((nmax, 3), dtype=np.float32)
n1 = n5 = 0

with open("position.txt", "r") as data:
    for line in data:
        if '0' <= line[0] <= '9':
            values = np.fromstring(line, dtype=np.float32, sep=' ')
            if values[0] == 1:
                a1[n1] = values[1:]
                n1 += 1
            elif values[0] == 5:
                a5[n5] = values[1:]
                n5 += 1

# trim (no memory is released)
a1 = a1[:n1]
a5 = a5[:n5]
Note that float equality comparisons (==) are generally not recommended, but in the case of values[0] == 1 we know that it's a small integer, for which the float representation is exact.
If you want to economize on memory (for example if you want to run several python processes in parallel), then you could initialize the arrays as disk-mapped arrays, like this:
a1 = np.memmap('data_1.bin', dtype=np.float32, mode='w+', shape=(nmax, 3))
a5 = np.memmap('data_5.bin', dtype=np.float32, mode='w+', shape=(nmax, 3))
With memmap, the files won't contain any metadata on data type and array shape (or human-readable descriptions). I'd recommend that you convert the data to npz format in a separate job; don't run these jobs in parallel because they will load the entire array in memory.
n = 3  # number of rows to read back (3 in this small test)
a1m = np.memmap('data_1.bin', dtype=np.float32, shape=(n, 3))
a5m = np.memmap('data_5.bin', dtype=np.float32, shape=(n, 3))
np.savez('data.npz', a1=a1m, a5=a5m, info='This is test data from SO')
You can load them like this:
data = np.load('data.npz')
a1 = data['a1']
Depending on the balance between cost of disk space, processing time, and memory, you could compress the data.
import zlib
zlib.Z_DEFAULT_COMPRESSION = 3 # faster for lower values
np.savez_compressed('data.npz', a1=a1m, a5=a5m, info='...')
If float32 has more precision than you need, you could truncate the binary representation for better compression.
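For example (a rough sketch; the truncate_mantissa helper and keep_bits value here are made up, and a1m/a5m are the memory-mapped arrays from above), zeroing the low mantissa bits keeps the dtype and shape while making the bytes compress better:
import numpy as np

def truncate_mantissa(a, keep_bits=12):
    # zero the lowest (23 - keep_bits) mantissa bits of a float32 array
    mask = np.uint32((0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF)
    return (np.ascontiguousarray(a, dtype=np.float32).view(np.uint32) & mask).view(np.float32)

np.savez_compressed('data.npz', a1=truncate_mantissa(a1m), a5=truncate_mantissa(a5m))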
If you like memory-mapped files, you can save in npy format:
np.save('data_1.npy', a1m)
a1 = np.load('data_1.npy', mmap_mode='r+')
But then you can't use compression, and you'll end up with many separate files whose only metadata is the array size and datatype.

Efficient way to create a large matrix in python from a table of interactions

I have a CSV file in the following structure, indicating "interactions".
I need to convert this to a standard square matrix format so that I can use some other functions written for graphs (with igraph).
The CSV file I would like to convert has ~106M rows in the following format:
node1 node2 interaction strength
XYZ ABC 0.74
XYZ TAH 0.24
XYZ ABA 0.3
ABC TAH 0.42
... (node names are made up to show there is no pattern, except that node1 is ordered)
and the standard format I would like to end up with has about 16K rows and 16K columns, as follows:
XYZ ABC ABA TAH ...
XYZ 0 0.74 0.3 0
ABC 0.74 0 0 0.42
ABA 0.3 0 0 0
TAH 0 0.42 0 0
.
.
.
I do not necessarily need to end up with a dataframe, but I do need the row and column names noted in the same order, and to save this final matrix as a CSV somewhere.
What I tried is:
import pandas as pd
import progressbar

def list_uniqify(seq, idfun=None):
    # order preserving
    if idfun is None:
        def idfun(x): return x
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        # in old Python versions:
        # if seen.has_key(marker)
        # but in new ones:
        if marker in seen: continue
        seen[marker] = 1
        result.append(item)
    return result

data = pd.read_csv('./pipelines/cache/fr2/summa_fr2.csv', index_col=0)
names_ordered = list_uniqify(data.iloc[:, 0].tolist() + data.iloc[:, 1].tolist())
adj = pd.DataFrame(0, index=names_ordered, columns=names_ordered)

bar = progressbar.ProgressBar(maxval=data.shape[0] + 1,
                              widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
bar.update(0)
bar.start()

print("Preparing output...")
for i in range(data.shape[0]):
    bar.update(i)
    adj.loc[data.iloc[i, 0], data.iloc[i, 1]] = data.iloc[i, 2]
    adj.loc[data.iloc[i, 1], data.iloc[i, 0]] = data.iloc[i, 2]
bar.finish()

print("Saving output...")
adj.to_csv("./data2_fr2.csv")
About 20-30 minutes in I had only reached 1%, which means this would take about 2 days, which is too long.
Is there anything I can do to speed this process up?
Note: I could parallelize this (8 cores, 15 GB RAM, ~130 GB swap),
but a single-core run already takes 15 GB RAM and ~15 GB swap. I am not sure if this is a good idea or not. Since no two processes would write to the same cell of the dataframe, I wouldn't need to worry about a race condition, right?
Edit: Below are speed tests for the suggested functions; they are amazingly better than the loop I implemented (which took ~34 seconds for 50K...).
speeds in seconds for 250K, 500K, 1M rows:
pivot_table: 0.029901999999999873, 0.031084000000000334, 0.0320750000000003
crosstab: 0.023093999999999948, 0.021742999999999846, 0.021409000000000233
Look at using pd.crosstab:
pd.crosstab(df['node1'],df['node2'],df['interaction'],aggfunc='first').fillna(0)
Output:
node2 ABA ABC TAH
node1
ABC 0.0 0.00 0.42
XYZ 0.3 0.74 0.24
I think you just need .pivot_table and then to reindex the rows and columns, filling missing values with 0.
import pandas as pd
df2 = (pd.pivot_table(df, index='node1', columns='node2', values='interaction_strength')
         .reindex(df.node1.drop_duplicates())
         .reindex(df.node1.drop_duplicates(), axis=1)
         .fillna(0))
df2.index.name = None
df2.columns.name = None
Output:
XYZ ABC
XYZ 0.0 0.74
ABC 0.0 0.00
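Note that both approaches only fill the directions present in the file; if, like the original loop, you also want the mirrored entries and the full square index, you can reindex over all node names and take the elementwise maximum with the transpose. A sketch, assuming the columns are named node1, node2 and interaction:
import numpy as np
import pandas as pd

nodes = pd.unique(df[['node1', 'node2']].values.ravel())
adj = (pd.pivot_table(df, index='node1', columns='node2', values='interaction')
         .reindex(index=nodes, columns=nodes)
         .fillna(0))
# mirror the known values so adj[a, b] == adj[b, a]
adj = pd.DataFrame(np.maximum(adj.values, adj.values.T), index=nodes, columns=nodes)
adj.to_csv('adjacency.csv')  # output filename is just a placeholder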

Effective reasonable indexing for numeric vector search?

I have a long numeric table where 7 columns form a key and 4 columns are the value to find.
Actually I have rendered an object at different distances and perspective angles and calculated Hu moments for its contour. But this is not important to the question, just an example to give the idea.
So, when I have 7 values, I need to scan the table, find the closest values in those 7 columns, and extract the corresponding 4 values.
The aspects of the task to consider are as follows:
1) the numbers have errors
2) the scale in the function domain is not the same as the scale in the function values; i.e. the "distance" between points in the 7-dimensional space should depend on those 4 values and how strongly they are affected
3) the search should be fast
So the question is: is there an algorithm out there to solve this task efficiently, i.e. build some index on those 7 columns, but not the way conventional databases do it, taking the points above into account?
If I understand the problem correctly, you might consider using scipy.cluster.vq (vector quantization):
Suppose your 7 numeric columns look like this (let's call the array code_book):
import scipy.cluster.vq as vq
import scipy.spatial as spatial
import numpy as np
np.random.seed(2013)
np.set_printoptions(precision=2)
code_book = np.random.random((3,7))
print(code_book)
# [[ 0.68 0.96 0.27 0.6 0.63 0.24 0.7 ]
# [ 0.84 0.6 0.59 0.87 0.7 0.08 0.33]
# [ 0.08 0.17 0.67 0.43 0.52 0.79 0.11]]
Suppose the associated 4 columns of values look like this:
values = np.arange(12).reshape(3,4)
print(values)
# [[ 0 1 2 3]
# [ 4 5 6 7]
# [ 8 9 10 11]]
And finally, suppose we have some "observations" of 7-column values like this:
observations = np.random.random((5,7))
print(observations)
# [[ 0.49 0.39 0.41 0.49 0.9 0.89 0.1 ]
# [ 0.27 0.96 0.16 0.17 0.72 0.43 0.64]
# [ 0.93 0.54 0.99 0.62 0.63 0.81 0.36]
# [ 0.17 0.45 0.84 0.02 0.95 0.51 0.26]
# [ 0.51 0.8 0.2 0.9 0.41 0.34 0.36]]
To find the 7-valued row in code_book which is closest to each observation, you could use vq.vq:
index, dist = vq.vq(observations, code_book)
print(index)
# [2 0 1 2 0]
The index values refer to rows in code_book. However, if the rows in values are ordered the same way as code_book, we can "look up" the associated values with values[index]:
print(values[index])
# [[ 8 9 10 11]
# [ 0 1 2 3]
# [ 4 5 6 7]
# [ 8 9 10 11]
# [ 0 1 2 3]]
The above assumes you have all your observations arranged in an array. Thus, to find all the indices you need only one call to vq.vq.
However, if you obtain the observations one at a time and need to find the closest row in code_book before going on to the next observation, then it would be inefficient to call vq.vq each time. Instead, generate a KDTree once, and then find the nearest neighbor(s) in the tree:
tree = spatial.KDTree(code_book)
for observation in observations:
    distances, indices = tree.query(observation)
    print(indices)
# 2
# 0
# 1
# 2
# 0
Note that the number of points in your code_book (N) must be large compared to the dimension of the data (e.g. N >> 2**7) for the KDTree to be fast compared to simple exhaustive search.
Using vq.vq or KDTree.query may or may not be faster than exhaustive search, depending on the size of your data (code_book and observations). To find out which is faster, be sure to benchmark these versus an exhaustive search using timeit.
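If the 7 key columns live on very different scales (the asker's point 2), one simple option is to rescale each column before building the tree; a sketch using the arrays above, with the per-column standard deviation as the scale (that particular choice is just an example):
# divide each key column by its spread so no single dimension dominates the distance
scale = code_book.std(axis=0)
tree = spatial.KDTree(code_book / scale)
distances, indices = tree.query(observations / scale)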
I don't know if I understood your question well, but I will try to give an answer.
For each row K in the table, compute the distance of your key from the key stored in that row:
( (X1-K1)^2 + (X2-K2)^2 + (X3-K3)^2 + (X4-K4)^2 + (X5-K5)^2 + (X6-K6)^2 + (X7-K7)^2 )^0.5
where {X1,X2,X3,X4,X5,X6,X7} is your key and {K1,K2,K3,K4,K5,K6,K7} is the key at row K.
You can make one factor of the key more or less relevant than the others by multiplying it while computing the distance; for example you could replace (X1-K1)^2 in the formula above with 5*(X1-K1)^2 to make that factor more influential.
Store the distance in one variable and the row number in a second variable.
Do the same with the following rows, and whenever the new distance is lower than the one you stored, replace the stored distance and row number.
When you have checked all the rows in your table, the second variable will hold the row nearest to your key.
Here is a sketch of that loop in Python:
import math

def distance(key, row_key):
    # Euclidean distance from the formula above; multiply a term to weight it more
    return math.sqrt(sum((x - k) ** 2 for x, k in zip(key, row_key)))

def find_closest(key, table):
    # table has one row per entry: 7 key columns followed by 4 value columns
    closest_distance = math.inf
    closest_row = 0
    for row in range(len(table)):
        new_distance = distance(key, table[row][0:7])
        if new_distance < closest_distance:
            closest_distance = new_distance
            closest_row = row
    return table[closest_row][7:11]  # this should be the value you were looking for
I know it isn't fast, but it is the best I could do; I hope it helped.
P.S. I haven't considered measurement errors, I know.
