I am manipulating .csv files. I have to loop through each column of numeric data in the file and enter them into different lists. The code I have is the following:
import csv
salto_linea = "\n"
csv_file = "02_CSV_data1.csv"
with open(csv_file, 'r') as csv_doc:
    doc_reader = csv.reader(csv_doc, delimiter=",")
    mpg = []
    cylinders = []
    displacement = []
    horsepower = []
    weight = []
    acceleration = []
    year = []
    origin = []
    lt = [mpg, cylinders, displacement, horsepower,
          weight, acceleration, year, origin]
    for i, ln in zip(range(0, 9), lt):
        print(f"{i} -> {ln}")
        for row in doc_reader:
            y = row[i]
            ln.append(y)
In the loop, I try to use range() as an index so that the nested for loop goes through the first column (the first element of each row in the CSV) and feeds it into the first list of 'lt'. The problem I have is that I go through the data column and append it, but then range() just keeps advancing in the outer loop and the nesting ends; I expected that with i = 1 the new value of 'i' would enter the nested loop again, traverse the next column, and so on. I also tried a while loop with a counter that increments on each iteration and serves as an index, but that didn't work either.
How can I fill the sublists in 'lt' with the data from the CSV file?
Without seeing the contents of the CSV file itself, the best way of reading the data into a table is with the pandas module, which can be done in one line of code.
import pandas as pd
df = pd.read_csv('02_CSV_data1.csv')
This reads all the data into a DataFrame that you can then work with.
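For example, assuming the file has a header row with column names matching your lists (an assumption on my part, since the file contents aren't shown), each column can be pulled out as a plain Python list:
# column names here are assumed; adjust them to your actual CSV header
mpg = df['mpg'].tolist()
horsepower = df['horsepower'].tolist()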
Alternatively, amend the for loop like this:
for row in doc_reader:
    for i, ln in enumerate(lt):
        ln.append(row[i])
For bigger data I would prefer pandas, which has vectorised methods.
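Putting the amended loop back into your original setup, a minimal sketch of the csv.reader version (same file and list names as in your code) would be:
import csv

mpg, cylinders, displacement, horsepower = [], [], [], []
weight, acceleration, year, origin = [], [], [], []
lt = [mpg, cylinders, displacement, horsepower,
      weight, acceleration, year, origin]

with open("02_CSV_data1.csv", 'r') as csv_doc:
    doc_reader = csv.reader(csv_doc, delimiter=",")
    # next(doc_reader)  # uncomment to skip a header row, if your file has one
    for row in doc_reader:
        # each row is a list of strings; field i goes into list i
        for i, ln in enumerate(lt):
            ln.append(row[i])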
I have the following code,
temp = dict()
for _, row in df_A.iterrows():
    if row["anchor"] not in temp:  # anchor, id, and name are columns in df_A
        temp[row["anchor"]] = [row["id"], row["name"]]

''' Will do the same on df_B, df_C, etc... '''

for index, row in df_Main.iterrows():
    if row["anchor"] in temp:
        df_Main.at[index, "id"] = temp[row["anchor"]][0]
        df_Main.at[index, "name"] = temp[row["anchor"]][1]
But here, df_Main can have more than 1 million rows, and df_A, df_B, etc... can have 50,000 to 100,000 entries each. In this case, will it be inefficient to use iterrows()?
Also, how can I do these operations in a single line? I am fairly new to Python and I don't know how to achieve this using lambda and apply.
In the first loop, you are looking for the first occurrence of each anchor and its corresponding id and name, and saving them in a dictionary. You can achieve the same with this line (here 'A' stands for your anchor column, 'B' for id and 'C' for name):
df = df_A.drop_duplicates('A', keep='first').set_index('A')
The second loop can be optimized like this:
bool_index = df_main['A'].isin(df.index)
values_A = df_main[bool_index]['A']
df_main.loc[bool_index,'B'] = df.loc[values_A,'B'].values
df_main.loc[bool_index,'C'] = df.loc[values_A,'C'].values
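For completeness, the same lookup can also be written with Series.map using the question's actual column names (anchor, id, name); this is just a sketch of the same idea:
# keep the first occurrence of each anchor, indexed by anchor
dedup = df_A.drop_duplicates('anchor', keep='first').set_index('anchor')

# map() looks up each anchor in dedup's index; rows with no match become NaN,
# so fill those back with the existing values to leave them untouched
df_Main['id'] = df_Main['anchor'].map(dedup['id']).fillna(df_Main['id'])
df_Main['name'] = df_Main['anchor'].map(dedup['name']).fillna(df_Main['name'])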
I have created a pandas dataframe and update it to hold sections (slices) of an image and their corresponding features.
slice_sq_dim = 200
df_slice = pd.DataFrame({'Sample': str,
                         'Slice_ID': int,
                         'Slice_Array': [np.zeros((slice_sq_dim, slice_sq_dim))],
                         'Interface_Array': [np.zeros((slice_sq_dim, slice_sq_dim))],
                         'Slice_Array_Threshold': [np.zeros((slice_sq_dim, slice_sq_dim))]})
I added individual elements of this dataframe by updating the value of each cell through row-by-row iteration. Once I have completed my dataframe (with around 200 rows), I cannot seem to display more than the first row of its contents. I assume that this is due to the inclusion of multi-dimensional numpy arrays (image slices) as a component. I have also exported this data into a JSON file so that it can act as an input file during the next run. The following code shows how exactly I tried this and also how I fill my dataframe.
Slices_data_file = os.path.join(os.getcwd(), "Slices_dataframe.json")
if os.path.isfile(Slices_data_file):
    print("Using the saved data of slices from previous run..")
    df_slice = pd.read_json(Slices_data_file, orient='records')
else:
    print("No previously saved slice data found..")
    no_of_slices = 20
    for index, row in df_files.iterrows():  # df_files is the previous dataframe with image path details
        path = row['image_path']
        slices, slices_thresh, slices_interface = slice_image(path, slice_sq_dim, no_of_slices)
        # each of the outputs is a list of 20 image slices
        for n, arr in enumerate(slices):
            indx = (indx_row - 1) * no_of_slices + n
            df_slice.Sample[indx] = path
            df_slice.Slice_ID[indx] = n + 1
            df_slice.Slice_Array[indx] = arr
            df_slice.Interface_Array[indx] = slices_interface[n]
            df_slice.Slice_Array_Threshold[indx] = slices_thresh[n]
    df_slice.to_json(Slices_data_file, orient='records')
I would like to do the following things:
Complete the dataframe with the possibility to add further columns of scalar values
View the dataframe normally with multiple rows and iterate using functions such as df_slice.iterrows() which is currently not supported
Save and reuse the database so as to avoid the repeated and time-consuming operations
Any advice or better suggestions?
After a while of searching, I found some topics that helped. pd.Series was very appropriate here. Also, I think there was a "SettingWithCopyWarning" that I chose to ignore somewhere in between. The final code is given below:
Slices_data_file = os.path.join(os.getcwd(), "Slices_dataframe.json")
if os.path.isfile(Slices_data_file):
    print("Using the saved data of slices from previous run..")
    df_slice = pd.read_json(Slices_data_file, orient='columns')
else:
    print("No previously saved slice data found..")
    Sample_col = []
    Slice_ID_col = []
    Slice_Array_col = []
    Interface_Array_col = []
    Slice_Array_Threshold_col = []
    no_of_slices = 20
    slice_sq_dim = 200
    df_slice = pd.DataFrame({'Sample': str,
                             'Slice_ID': int,
                             'Slice_Array': [],
                             'Interface_Array': [],
                             'Slice_Array_Threshold': []})
    for index, row in df_files.iterrows():
        path = row['image_path']
        slices, slices_thresh, slices_interface = slice_image(path, slice_sq_dim, no_of_slices)
        for n, arr in enumerate(slices):
            Sample_col.append(Image_Unique_ID)
            Slice_ID_col.append(n + 1)
            Slice_Array_col.append(arr)
            Interface_Array_col.append(slices_interface[n])
            Slice_Array_Threshold_col.append(slices_thresh[n])
        print("Slicing -> ", Image_Unique_ID, " Complete")
    df_slice['Sample'] = pd.Series(Sample_col)
    df_slice['Slice_ID'] = pd.Series(Slice_ID_col)
    df_slice['Slice_Array'] = pd.Series(Slice_Array_col)
    df_slice['Interface_Array'] = pd.Series(Interface_Array_col)
    df_slice['Slice_Array_Threshold'] = pd.Series(Slice_Array_Threshold_col)
    df_slice.to_json(os.path.join(os.getcwd(), "Slices_dataframe.json"), orient='columns')
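One caveat: to_json stores the numpy arrays as nested lists, so after pd.read_json they come back as plain Python lists rather than arrays. A small sketch (assuming the column names used above) to restore them on reload:
import numpy as np

df_slice = pd.read_json(Slices_data_file, orient='columns')
for col in ['Slice_Array', 'Interface_Array', 'Slice_Array_Threshold']:
    df_slice[col] = df_slice[col].apply(np.array)  # nested lists back to ndarrays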
(I've edited the first column name in the labels_df for clarity)
I have two DataFrames, train_df and labels_df. train_df has integers that map to attribute names in the labels_df. I would like to look up each number within a given train_df cell and return in the adjacent cell, the corresponding attribute name from the labels_df.
So for example, the first observation in train_df has attribute_ids of 147, 616 and 813, which map (in the labels_df) to culture::french, tag::dogs, tag::men. And I would like to place those strings inside one cell on the same row as the corresponding integers.
I've tried variations of the function below but fear I am way off:
def my_mapping(df1, df2):
    tags = df1['attribute_ids']
    for i in tags.iteritems():
        df1['new_col'] = df2.iloc[i]
    return df1
The data are originally from two csv files:
train.csv
labels.csv
I tried this from @Danny:
sample_train_df['attribute_ids'].apply(lambda x: [sample_labels_df[sample_labels_df['attribute_name'] == i]
                                                  ['attribute_id_num'] for i in x])
*please note - I am running the above code on samples of each DF due to run times on the original DFs.
which returned:
I hope this is what you are looking for. I am sure there's a much more efficient way using lookup.
df['new_col'] = df['attribute_ids'].apply(lambda x: [labels_df[labels_df['attribute_id'] == i]['attribute_name'] for i in x])
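A follow-up note: this filters labels_df once per id, so on large frames it may be slow. A dictionary built once should be faster; this is only a sketch, assuming the same attribute_id / attribute_name columns:
# build the id -> name lookup once
id_to_name = dict(zip(labels_df['attribute_id'], labels_df['attribute_name']))
df['new_col'] = df['attribute_ids'].apply(lambda ids: [id_to_name.get(i) for i in ids])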
This is super ugly, and one day, hopefully sooner rather than later, I'll be able to accomplish this task in an elegant fashion. Until then, this is what got me the result I need.
split train_df['attribute_ids'] into their own cell/column
helper_df = train_df['attribute_ids'].str.split(expand=True)
combine train_df with the helper_df so I have the id column (they are photo id's)
train_df2 = pd.concat([train_df, helper_df], axis=1)
drop the original attribute_ids column
train_df2.drop(columns = 'attribute_ids', inplace=True)
rename the new columns
train_df2 = train_df2.rename(columns = {0:'attr1', 1:'attr2', 2:'attr3', 3:'attr4', 4:'attr5', 5:'attr6',
                                        6:'attr7', 7:'attr8', 8:'attr9', 9:'attr10', 10:'attr11'})
convert the labels_df into a dictionary
def create_file_mapping(df):
    mapping = dict()
    for i in range(len(df)):
        name, tags = df['attribute_id_num'][i], df['attribute_name'][i]
        mapping[str(name)] = tags
    return mapping
map and replace the tag numbers with their corresponding tag names (my_map here is presumably the dictionary returned by create_file_mapping(labels_df))
train_df3 = train_df2.applymap(lambda s: my_map.get(s) if s in my_map else s)
create a new column containing each observation's tags as a list of concatenated values
helper1['new_col'] = helper1[helper1.columns[0:10]].apply(lambda x: ','.join(x.astype(str)), axis = 1)
I'm trying to only import part of a CSV file to a list.
In short, the CSV I receive contains two columns [depth and speed]. Depth always starts at zero, gets larger, and then goes back to zero again.
I would like to add the first part of the CSV to the list (depth 0-13+). I then want to add the second part of the CSV (13-0) to another list.
I assume a for loop would be the way to go, but I don't know how to check each row for ascending/descending numbers.
pullData = open("svp3.csv","r").read()
dataArray = pullData.split('\n')
depthArrayY = []
speedArrayX = []
depthArrayLength = len(depthArrayY)
for eachLine in dataArray:
    if len(eachLine) > 1:
        x, y = eachLine.split(',')
        speedArrayX.append(round(float(x), 2))
        depthArrayY.append(round(float(y), 2))
I'd suggest using pandas; I think it will allow you to do much more when you need to deal with imported data.
import pandas as pd
df = pd.read_csv('svp3.csv')
tmp = df[df.depth <= df.depth.shift(-1)].values
depth_increase = tmp[:,0]
speed_while_depth_increase = tmp[:,1]
tmp = df[df.depth > df.depth.shift(-1)].values
depth_decrease = tmp[:,0]
speed_while_depth_decrease = tmp[:,1]
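If you still need plain Python lists afterwards (as in your original code), the numpy arrays convert directly; the names below are just placeholders:
depth_up = depth_increase.tolist()
speed_up = speed_while_depth_increase.tolist()
depth_down = depth_decrease.tolist()
speed_down = speed_while_depth_decrease.tolist()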
I assumed that your CSV has the depth column first, then the speed column.
The depth column has values from 0 up to a certain max value, say 14, and then back down from 13 to 0, i.e. [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1], and I populated the speed column with some random values.
The following code makes use of the pandas library and splits the depth column into two lists of ascending and descending values, using the simple logic of storing the current max value to determine where the ascending part of the column ends.
import pandas as pd
data = pd.read_csv('svp3.csv')
max_val = -10000
depthArrayAscendingY = []
speedArrayX = []
depthArrayDescendingY = []
for a in data.values:
    if a[0] > max_val:
        depthArrayAscendingY.append(a[0])
        speedArrayX.append(a[1])
        max_val = a[0]
    else:
        depthArrayDescendingY.append(a[0])
        speedArrayX.append(a[1])
The answer to this question by Baleato is more efficient and cleaner than this one; you should definitely check their answer.
I have some data: 33k rows x 57 columns.
In some columns there is data which I want to translate with a dictionary.
I have done the translation, but now I want to write the translated data back to my data set.
I have a problem with saving the tuple output from the for loop.
I am using tuples to build a proper translation; .join and .append are not working in my case. I have tried many things but without any success.
Looking for any advice.
data = pd.read_csv(filepath, engine="python", sep=";", keep_default_na=False)
for index, row in data.iterrows():
    row["translated"] = tuple(slownik.get(znak) for znak in row["1st_service"])
I just want print(data["1st_service"]) to show the translated data, not the data from before the for loop.
First of all, if your csv doesn't already have a 'translated' column, you'll have to add it:
import numpy as np
data['translated'] = np.nan
The problem is that the row object you're trying to write to is only a view of the dataframe, not the dataframe itself. Plus, you're missing square brackets for your list comprehension, if I'm understanding what you're doing. So change your last line to:
data.loc[index, "translated"] = tuple([slownik.get(znak) for znak in row["1st_service"]])
and you'll get a tuple written into that one cell.
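If the row-by-row loop turns out to be slow on your 33k rows, the same translation can be done column-wise with apply; this is only a sketch, assuming slownik is your translation dictionary and every value in 1st_service is iterable:
data['translated'] = data['1st_service'].apply(
    lambda s: tuple(slownik.get(znak) for znak in s))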
In future, posting the exact error message you're getting is very helpful!
I have managed it; working code below:
data = pd.read_csv(filepath, engine="python", sep=";", keep_default_na=False)
data.columns = []
slownik = dict([ ])
trans = ' '
for index, row in data.iterrows():
    trans += str(tuple([slownik.get(znak) for znak in row["1st_service"]]))
data['1st_service'] = trans.split(')(')
data.to_csv("out.csv", index=False)
Can you tell me if it is well done?
Maybe there is a faster way to do it?
I am doing this for 12 columns in one for loop, as shown above.
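For reference, a faster variant that skips iterrows and the string round-trip entirely might look like this; the column list is hypothetical, so substitute your actual 12 column names:
cols = ['1st_service']  # hypothetical list: add the other 11 column names here
for col in cols:
    data[col] = data[col].apply(lambda s: tuple(slownik.get(znak) for znak in s))
data.to_csv("out.csv", index=False)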