I have over 50 CSV files to process, and each dataset looks like this:
[a simplified csv file example](https://i.stack.imgur.com/yoGo9.png)
There are three columns: Times, Index, Voltage.
I want to interpolate the time at which the voltage drop reaches 53% of the total decrease at index 2 [here the total decrease is (84-69) = 15, so the target is 15*0.53].
I will repeat this process for index 4, too. May I ask what I should do?
I am a beginner in Python and tried the following script:
import pandas as pd
import glob
import numpy as np
import os
import matplotlib.pyplot as plt
import xlwings as xw

path = os.getcwd()
csv_files = glob.glob(os.path.join(path, "*.csv"))  # collect all csv files

for f in csv_files:  # process each dataset
    df = pd.read_csv(f)
    ta95 = df[df.Step_Index == 2]     # new dataframe holding only the rows at index 2
    b = ta95["Voltage(V)"].iloc[0]    # voltage in the first row of the step
    d = ta95["Voltage(V)"].iloc[-1]   # voltage in the last row of the step
    e = (b - d) * 0.53                # 53% of the total voltage decrease
I don't know what I should do next in this script.
I appreciate your time and support if you can offer any help.
If you have any recommended websites for me to read that would help me solve this kind of problem, I would appreciate that too. Thanks again.
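For what it's worth, the remaining step can be done with np.interp, interpolating time as a function of voltage. A minimal sketch, assuming the column names from the script above, a time column named 'Times' (adjust to your actual header), and that voltage decreases monotonically within the step:

import numpy as np
import pandas as pd

df = pd.read_csv("example.csv")       # stand-in for one of your files
ta95 = df[df.Step_Index == 2]         # rows at index 2, as above

b = ta95["Voltage(V)"].iloc[0]        # first voltage of the step
d = ta95["Voltage(V)"].iloc[-1]       # last voltage of the step
target = b - (b - d) * 0.53           # voltage once 53% of the drop is reached

# np.interp needs increasing x-values, and voltage decreases over time,
# so reverse both arrays to interpolate time as a function of voltage
t_cross = np.interp(target,
                    ta95["Voltage(V)"].values[::-1],
                    ta95["Times"].values[::-1])
print(t_cross)

The same block can be repeated with Step_Index == 4.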
[data set image]
Please use the Python language. I'm a beginner in frequent-pattern data mining. I'm trying to understand, so please be as simple and detailed as possible.
I tried using a for loop to collect data from a range, but I'm still learning and don't know how to implement it (it keeps giving me the error "index 1 is out of bounds for axis 1 with size 1"). Please guide me.
NB: I was also trying to construct a data frame, but I don't know how to. Please help me with that too.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import csv

# Calling the DataFrame constructor
Data = pd.read_csv('retail.txt', header=None)

# Initializing the list
transacts = []
# Populating a list of transactions
for i in range(1, 9):
    transacts.append([str(Data.values[i, j]) for j in range(1, 2000)])
df = pd.DataFrame()
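For reference, a likely cause of "index 1 is out of bounds for axis 1 with size 1" is that retail.txt is not comma-separated, so read_csv packs every line into a single column. A sketch that sidesteps both that and the hard-coded range(1, 2000), assuming each line of the file is one whitespace-separated transaction:

import pandas as pd

transacts = []
with open('retail.txt') as f:
    for line in f:
        items = line.split()      # one whitespace-separated transaction per line
        if items:
            transacts.append(items)

# Each row becomes one transaction; shorter rows are padded with None/NaN
df = pd.DataFrame(transacts)
print(df.head())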
I want to read a CSV file (filled in by temperature sensors) with Python 3.
Reading the CSV file into an array works fine. Printing a single cell by index fails. Please help me with the right line of code.
This is the code:
import sys
import pandas as pd
import numpy as np

# Reading the CSV file
# date;seconds;Time;Aussen-Temp;Ruecklauf;Kessel;Vorlauf;Diff
# 20211019;0;20211019;12,9;24;22,1;24,8;0,800000000000001
# ...
# ... (2800 rows in total)
br = pd.read_csv('/var/log/LM92Temperature_U-20211019.csv',
                 header=0,
                 sep=";",
                 usecols=['date', 'seconds', 'Time', 'Aussen-Temp',
                          'Ruecklauf', 'Kessel', 'Vorlauf', 'Diff'])
print(br)  # works fine - prints the whole CSV table :-) !
#-----------------------------------------------------------
# Now I want to print element [2][3] of the two-dimensional "CSV" array. How do I manage that?
print(br[2][3])  # ... ends up with an error ...
# What is the correct code needed here, please?
Thanks in advance & Regards
Give the name of the column, not the index:
print(br['Time'][3])
As an aside, you can read your data with only the following, and you may want decimal=',' as well:
import pandas as pd
br = pd.read_csv('/var/log/LM92Temperature_U-20211019.csv', sep=';', decimal=',')
print(br)
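If purely positional access like br[2][3] is what you are after, pandas also provides .iloc:

print(br.iloc[2, 3])   # value at row position 2, column position 3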
I am trying to read 6 files into 7 different data frames, but I am unable to figure out how I should do that. The file names can be completely random; that is, I know the files, but they are not named like data1.csv, data2.csv.
I tried using something like this:
import sys
import os
import numpy as np
import pandas as pd
from datetime import datetime, timedelta
f1 = 'Norway.csv'
f = 'Canada.csv'
f = 'Chile.csv'
Norway = pd.read_csv(Norway.csv)   # fails: the file name is not a quoted string
Canada = pd.read_csv(Canada.csv)
Chile = pd.read_csv(Chile.csv)
I need to read multiple files into different dataframes. It works fine when I do it with one file, like:
file = 'Norway.csv'
Norway = pd.read_csv(file)
And I am getting this error:
NameError: name 'Norway' is not defined
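A sketch of one way to handle a handful of known but arbitrarily named files: map each name to its path and build a dictionary of dataframes (the file names here are illustrative):

import pandas as pd

files = {'Norway': 'Norway.csv', 'Canada': 'Canada.csv', 'Chile': 'Chile.csv'}
dfs = {name: pd.read_csv(path) for name, path in files.items()}

print(dfs['Norway'].head())   # look each frame up by name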
You can read all the .csv files into one single dataframe:
import glob
import pandas as pd

all_files = glob.glob('*.csv')
dfs = []
for file_ in all_files:
    df = pd.read_csv(file_, index_col=None, header=0)
    dfs.append(df)
# concatenate all dfs into one
big_df = pd.concat(dfs, ignore_index=True)
and then split the large dataframe into multiple pieces (in your case 7). For example:
import numpy as np

num_chunks = 3
df1, df2, df3 = np.array_split(big_df, num_chunks)
Hope this helps.
After googling for a while looking for an answer, I decided to combine answers from different questions into a solution to this question. This solution will not work for all possible cases; you will have to tweak it to fit yours.
Check out the solution to this question:
# import libraries
import pandas as pd
import numpy as np
import glob
import os

# Declare a function for extracting a string between two characters
def find_between(s, first, last):
    try:
        start = s.index(first) + len(first)
        end = s.index(last, start)
        return s[start:end]
    except ValueError:
        return ""

path = '/path/to/folder/containing/your/data/sets'  # use your path
all_files = glob.glob(path + "/*.csv")
list_of_dfs = [pd.read_csv(filename, encoding="ISO-8859-1") for filename in all_files]
list_of_filenames = [find_between(filename, 'sets/', '.csv') for filename in all_files]  # 'sets' is the last word in your path

# Create a dictionary with table names as the keys and data frames as the values
dfnames_and_dfvalues = dict(zip(list_of_filenames, list_of_dfs))
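Once the dictionary is built, each frame can be pulled out by its file stem, e.g. (assuming a file named Norway.csv was among the inputs):

norway_df = dfnames_and_dfvalues['Norway']
print(norway_df.head())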
I have daily time-series data for almost 2 years of cluster available space (in GB). I am trying to use Facebook's Prophet to make future forecasts, but some forecasts have negative values. Since negative values do not make sense, I saw that using a carrying capacity with the logistic growth model helps eliminate negative forecasts via cap values. I am not sure whether this is applicable to my case or how to get the cap value for my time series. Please help, as I am new to this and confused. I am using Python 3.6.
import numpy as np
import pandas as pd
import xlrd
import openpyxl
from pandas import datetime
import csv
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from fbprophet import Prophet
import os
import sys
import signal

df = pd.read_excel("Data_Per_day.xlsx")
df1 = df.filter(['cluster_guid', 'date', 'avail_capacity'], axis=1)
uniquevalues = np.unique(df1[['cluster_guid']].values)
for id in uniquevalues:
    newdf = df1[df1['cluster_guid'] == id]
    newdf1 = newdf.groupby(['cluster_guid', 'date'], as_index=False)['avail_capacity'].sum()
    #newdf11 = newdf.groupby(['cluster_guid', 'date'], as_index=False)['total_capacity'].sum()
    #cap[id] = newdf11['total_capacity'].max()
    #print(cap[id])
    newdf1.set_index('cluster_guid', inplace=True)
    newdf1.to_csv('my_csv.csv', mode='a', header=None)

with open('my_csv.csv', newline='') as f:
    r = csv.reader(f)
    data = [line for line in r]
with open('my_csv.csv', 'w', newline='') as f:
    w = csv.writer(f)
    w.writerow(['cluster_guid', 'DATE_TAKEN', 'avail_capacity'])
    w.writerows(data)

in_df = pd.read_csv('my_csv.csv', parse_dates=True, index_col='DATE_TAKEN')
in_df.to_csv('my_csv.csv')
dfs = pd.read_csv('my_csv.csv')
uni = dfs.cluster_guid.unique()

while True:
    try:
        print("Press Ctrl+C to exit or enter the cluster guid to be forecasted")
        i = input('Please enter the cluster guid: ')
        if i not in uni:
            print('Please enter a valid cluster guid')
            continue
        else:
            dfs1 = dfs.loc[dfs['cluster_guid'] == i]   # filter dfs, not the original df
            dfs1.drop('cluster_guid', axis=1, inplace=True)
            dfs1.to_csv('dataframe' + i + '.csv', index=False)
            dfs2 = pd.read_csv('dataframe' + i + '.csv')
            dfs2['DATE_TAKEN'] = pd.DatetimeIndex(dfs2['DATE_TAKEN'])
            dfs2 = dfs2.rename(columns={'DATE_TAKEN': 'ds', 'avail_capacity': 'y'})
            my_model = Prophet(interval_width=0.99)
            my_model.fit(dfs2)
            future_dates = my_model.make_future_dataframe(periods=30, freq='D')
            forecast = my_model.predict(future_dates)
            print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']])
            my_model.plot(forecast, uncertainty=True)
            my_model.plot_components(forecast)
            plt.show()
            os.remove('dataframe' + i + '.csv')
            os.remove('my_csv.csv')
    except KeyboardInterrupt:
        try:
            os.remove('my_csv.csv')
        except OSError:
            pass
        sys.exit(0)
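On the cap question itself: Prophet's logistic growth needs a 'cap' column on both the training frame and the future frame, and a 'cap' is also required whenever a 'floor' is set. A minimal sketch, reusing the dfs2 frame from the code above; the capacity estimate is hypothetical (the commented-out total_capacity maximum in the loop would be the natural choice):

from fbprophet import Prophet

cap_value = dfs2['y'].max() * 1.1   # hypothetical; ideally the cluster's total capacity
dfs2['cap'] = cap_value
dfs2['floor'] = 0                   # available space cannot go below zero

my_model = Prophet(growth='logistic', interval_width=0.99)
my_model.fit(dfs2)

future_dates = my_model.make_future_dataframe(periods=30, freq='D')
future_dates['cap'] = cap_value     # the future frame needs the same columns
future_dates['floor'] = 0
forecast = my_model.predict(future_dates)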
A Box-Cox transform of order 0 (i.e. a log transform) gets the trick done. Here are the steps:
1. Add 1 to each value (so as to avoid log(0))
2. Take the natural log of each value
3. Make the forecasts
4. Take the exponent and subtract 1
This way you will not get negative forecasts. The log also has the nice property of converting multiplicative seasonality into an additive form.
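A minimal sketch of that round trip, assuming the 'ds'/'y' frame (dfs2) from the question's code:

import numpy as np
from fbprophet import Prophet

dfs2['y'] = np.log1p(dfs2['y'])              # steps 1-2: log(1 + y)

my_model = Prophet(interval_width=0.99)
my_model.fit(dfs2)
future_dates = my_model.make_future_dataframe(periods=30, freq='D')
forecast = my_model.predict(future_dates)    # step 3: forecast on the log scale

for col in ['yhat', 'yhat_lower', 'yhat_upper']:
    forecast[col] = np.expm1(forecast[col])  # step 4: exp(x) - 1 back-transform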
I'm trying to write some very simple code: essentially, I am reading in two columns of time-series data (corresponding to slightly different time bins) and looping through different % weights of the second column with itself in consecutive time bins. However, when I run this loop, the original dataframe (specifically, df['EST']) is somehow being changed by this line:
X_new[j-1] = Wt*X_temp[j-1] + (1-Wt)*X_temp[j]
I narrowed it down to this line of code because, when I eliminate it, the initial dataframe is no longer changed. I don't understand how this line could be changing the original dataframe.
My complete code:
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import csv
import os

os.chdir("C://Users/XXX")
%matplotlib inline
import matplotlib.pyplot as plt

df = pd.read_csv('correlation test.csv')
# .values on a sliced Series returns the underlying array, not a copy
Y = df['%<30'][1:].values.reshape(-1, 1)
X_new = df['EST'][1:].values.reshape(-1, 1)
X_temp = df['EST'][1:].values.reshape(-1, 1)

Wt = 0
Best_Wt = Wt
Best_Score = 1
for i in range(1, 100):
    for j in range(1, df.shape[0] - 1):
        X_new[j-1] = Wt * X_temp[j-1] + (1 - Wt) * X_temp[j]
    RR = LinearRegression()
    RR.fit(X_new, Y)
    New_Score = np.mean(np.abs(RR.predict(X_new) - Y))
    if New_Score < Best_Score:
        Best_Score = New_Score
        Best_Wt = Wt
        print('New Best Score:', Best_Score)
        print('New Best Weight:', Best_Wt)
    Wt = Wt + 0.01
The file it pulls from contains two columns of percentages; the first column is labeled '%<30' and the second column is labeled 'EST'.
Thank you in advance for your help!
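For context, the usual cause of this symptom is that slicing a Series and reshaping the underlying array yields a view of the dataframe's buffer rather than a copy, so writes to X_new land in df as well (recent pandas versions with copy-on-write behave differently; the pandas in the question is old enough to expose the view). A small sketch of the distinction:

import pandas as pd

df = pd.DataFrame({'EST': [0.1, 0.2, 0.3, 0.4]})

view = df['EST'][1:].values.reshape(-1, 1)          # shares df's buffer
copy = df['EST'][1:].values.reshape(-1, 1).copy()   # independent array

view[0] = 99.0               # also changes df['EST'] at position 1
copy[0] = -1.0               # leaves df untouched
print(df['EST'].tolist())    # [0.1, 99.0, 0.3, 0.4]

Taking .copy() on X_new and X_temp right after the reshape keeps the loop from writing back into the dataframe.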