Making a timeseries plot in Python, but want to skip a few months - python-3.x

I have timeseries data of ice thickness. The plot is only useful for the winter months, and there is no interest in seeing big gaps during the summer months. Would it be possible to skip the summer months (say April to October) on the x-axis and replace them with a smaller area in a different color, labeled Summer?

Let's take this data:
import datetime
import numpy as np
import pandas as pd
n_samples = 20
index = pd.date_range(start='1/1/2018', periods=n_samples, freq='M')
values = np.random.randint(0,100, size=(n_samples))
data = pd.Series(values, index=index)
print(data)
2018-01-31 58
2018-02-28 93
2018-03-31 15
2018-04-30 87
2018-05-31 51
2018-06-30 67
2018-07-31 22
2018-08-31 66
2018-09-30 55
2018-10-31 73
2018-11-30 70
2018-12-31 61
2019-01-31 95
2019-02-28 97
2019-03-31 31
2019-04-30 50
2019-05-31 75
2019-06-30 80
2019-07-31 84
2019-08-31 19
Freq: M, dtype: int64
You can filter out the data that falls in those months: take the Series index, extract the month, check whether it is in the given range, and negate the result (with ~):
filtered1 = data[~data.index.month.isin(range(4,10))]
print(filtered1)
2018-01-31 58
2018-02-28 93
2018-03-31 15
2018-10-31 73
2018-11-30 70
2018-12-31 61
2019-01-31 95
2019-02-28 97
2019-03-31 31
If you plot that,
filtered1.plot()
the line is simply drawn straight across the gap, so you need to set the frequency, in this case monthly (M):
filtered1.asfreq('M').plot()
Additionally, you could use filters like:
filtered2 = data[data.index.month.isin([1,2,3,11,12])]
filtered3 = data[~ data.index.month.isin([4,5,6,7,8,9,10])]
if you need to keep or drop specific months.
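Putting the pieces together, here is a minimal self-contained sketch of the approach above; the shaded, labeled summer band uses matplotlib's axvspan and is only an extra illustration of the question's request, not part of the original answer:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Same toy data as above
n_samples = 20
index = pd.date_range(start='1/1/2018', periods=n_samples, freq='M')
data = pd.Series(np.random.randint(0, 100, size=n_samples), index=index)
# Drop April-September, then reindex to monthly so the gap becomes NaN
winter = data[~data.index.month.isin(range(4, 10))].asfreq('M')
fig, ax = plt.subplots()
ax.plot(winter.index, winter.values)  # NaN values break the line at the gap
# Optional: shade and label the skipped summer span (illustrative only)
ax.axvspan(pd.Timestamp('2018-04-01'), pd.Timestamp('2018-09-30'),
           color='orange', alpha=0.2)
ax.text(pd.Timestamp('2018-06-15'), winter.max(), 'Summer', ha='center')
plt.show()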

Related

Create new column in data frame by interpolating another column within a particular date range - Pandas

I have a df as shown below.
The data looks like this:
Date y
0 2020-06-14 127
1 2020-06-15 216
2 2020-06-16 4
3 2020-06-17 90
4 2020-06-18 82
5 2020-06-19 70
6 2020-06-20 59
7 2020-06-21 48
8 2020-06-22 23
9 2020-06-23 25
10 2020-06-24 24
11 2020-06-25 22
12 2020-06-26 19
13 2020-06-27 10
14 2020-06-28 18
15 2020-06-29 157
16 2020-06-30 16
17 2020-07-01 14
18 2020-07-02 343
The code to create the data frame.
# Create a dummy dataframe
import pandas as pd
import numpy as np
y0 = [127,216,4,90, 82,70,59,48,23,25,24,22,19,10,18,157,16,14,343]
def initial_forecast(data):
    data['y'] = y0
    return data
# Initial date dataframe
df_dummy = pd.DataFrame({'Date': pd.date_range('2020-06-14', periods=19, freq='1D')})
# Dates
start_date = df_dummy.Date.iloc[1]
print(start_date)
end_date = df_dummy.Date.iloc[17]
print(end_date)
# Adding y0 in the dataframe
df_dummy = initial_forecast(df_dummy)
df_dummy
From the above I would like to interpolate the data for a particular date range.
I would like to interpolate (linearly) between 2020-06-17 and 2020-06-27,
i.e. from 2020-06-17 to 2020-06-27 the 'y' values change from 90 to 10 in 10 steps, so on average each step reduces the value by 8:
i.e. (90 - 10) / 10 (number of steps) = 8 per step.
The expected output:
Date y y_new
0 2020-06-14 127 127
1 2020-06-15 216 216
2 2020-06-16 4 4
3 2020-06-17 90 90
4 2020-06-18 82 82
5 2020-06-19 70 74
6 2020-06-20 59 66
7 2020-06-21 48 58
8 2020-06-22 23 50
9 2020-06-23 25 42
10 2020-06-24 24 34
11 2020-06-25 22 26
12 2020-06-26 19 18
13 2020-06-27 10 10
14 2020-06-28 18 18
15 2020-06-29 157 157
16 2020-06-30 16 16
17 2020-07-01 14 14
18 2020-07-02 343 343
Note: In the remaining date range y_new value should be same as y value.
I tried the code below, but it is not giving the desired output:
# Function
def df_interpolate(df, start_date, end_date):
    df["Date"] = pd.to_datetime(df["Date"])
    df.loc[(df['Date'] >= start_date) & (df['Date'] <= end_date), 'y_new'] = np.nan
    df['y_new'] = df['y'].interpolate().round()
    return df
df1 = df_interpolate(df_dummy, '2020-06-17', '2020-06-27')
With some tweaks to your function it works: use np.where to create the new column, remove the = from your conditionals so the endpoint values 90 and 10 are kept as anchors, and cast to int as per your expected output.
def df_interpolate(df, start_date, end_date):
    df["Date"] = pd.to_datetime(df["Date"])
    df['y_new'] = np.where((df['Date'] > start_date) & (df['Date'] < end_date), np.nan, df['y'])
    df['y_new'] = df['y_new'].interpolate().round().astype(int)
    return df
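Calling the fixed function with the same date range as in the question (the same call as above) then prints the expected table:
df1 = df_interpolate(df_dummy, '2020-06-17', '2020-06-27')
print(df1)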
Date y y_new
0 2020-06-14 127 127
1 2020-06-15 216 216
2 2020-06-16 4 4
3 2020-06-17 90 90
4 2020-06-18 82 82
5 2020-06-19 70 74
6 2020-06-20 59 66
7 2020-06-21 48 58
8 2020-06-22 23 50
9 2020-06-23 25 42
10 2020-06-24 24 34
11 2020-06-25 22 26
12 2020-06-26 19 18
13 2020-06-27 10 10
14 2020-06-28 18 18
15 2020-06-29 157 157
16 2020-06-30 16 16
17 2020-07-01 14 14
18 2020-07-02 343 343

Choosing the values in a column based on the maximum values of another column

I am choosing values in a Pandas DataFrame.
I would like to choose the values in the columns 'One_T', 'Two_T', 'Three_T' (the total counts) based on the ratios in the columns 'One_R', 'Two_R', 'Three_R'.
The comparison is done on the columns 'One_R', 'Two_R', 'Three_R', and the values are taken from the columns 'One_T', 'Two_T', 'Three_T'.
I would like to find the highest value among the columns 'One_R', 'Two_R', 'Three_R' and put the corresponding value from 'One_T', 'Two_T', 'Three_T' in a new column 'Highest'.
For example, the first row has a higher value in One_R than in Two_R and Three_R,
so the value from One_T fills the column named Highest.
The initial data frame is test in the code below, and the desired result is result.
test = pd.DataFrame([[150,30,140,20,120,19],[170,31,130,30,180,22],[230,45,100,50,140,40],
[140,28,80,10,60,10],[100,25,80,27,50,23]], index=['2019-01-01','2019-02-01','2019-03-01','2019-04-01','2019-05-01'],
columns=['One_T','One_R','Two_T','Two_R','Three_T','Three_R'])
One_T One_R Two_T Two_R Three_T Three_R
2019-01-01 150 30 140 20 120 19
2019-02-01 170 31 130 30 180 22
2019-03-01 230 45 100 50 140 40
2019-04-01 140 28 80 10 60 10
2019-05-01 100 25 80 27 50 23
result = pd.DataFrame([[150,30,140,20,120,19,150],[170,31,130,30,180,22,170],[230,45,100,50,140,40,100],
[140,28,80,10,60,10,140],[100,25,80,27,50,23,80]], index=['2019-01-01','2019-02-01','2019-03-01','2019-04-01','2019-05-01'],
columns=['One_T','One_R','Two_T','Two_R','Three_T','Three_R','Highest'])
One_T One_R Two_T Two_R Three_T Three_R Highest
2019-01-01 150 30 140 20 120 19 150
2019-02-01 170 31 130 30 180 22 170
2019-03-01 230 45 100 50 140 40 100
2019-04-01 140 28 80 10 60 10 140
2019-05-01 100 25 80 27 50 23 80
Is there any way to do this?
Thank you for your time and consideration.
You can solve this using df.filter to select columns with the _R suffix, then idxmax. Then replace _R with _T and use df.lookup:
s = test.filter(like='_R').idxmax(1).str.replace('_R','_T')
test['Highest'] = test.lookup(s.index,s)
print(test)
One_T One_R Two_T Two_R Three_T Three_R Highest
2019-01-01 150 30 140 20 120 19 150
2019-02-01 170 31 130 30 180 22 170
2019-03-01 230 45 100 50 140 40 100
2019-04-01 140 28 80 10 60 10 140
2019-05-01 100 25 80 27 50 23 80
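Note that DataFrame.lookup was deprecated and later removed in newer pandas releases; a rough equivalent sketch using positional indexing on the underlying array (an assumption, not part of the original answer) would be:
import numpy as np
s = test.filter(like='_R').idxmax(axis=1).str.replace('_R', '_T')
# Pick, for each row, the value in the column named by s
test['Highest'] = test.to_numpy()[np.arange(len(test)), test.columns.get_indexer(s)]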

Pandas: how to drop the lowest 5th percentile for each indexed group?

I have the following issue with python pandas (I am relatively new to it): I have a simple dataset with a column for date, and a corresponding column of values. I am able to sort this Dataframe by date and value by doing the following:
df = df.sort_values(['date', 'value'],ascending=False)
I obtain this:
date value
2019-11 100
2019-11 89
2019-11 87
2019-11 86
2019_11 45
2019_11 33
2019_11 24
2019_11 11
2019_11 8
2019_11 5
2019-10 100
2019-10 98
2019-10 96
2019-10 94
2019_10 94
2019_10 78
2019_10 74
2019_10 12
2019_10 3
2019_10 1
Now, what I would like to do, is to get rid of the lowest fifth percentile for the value column for EACH month (each group). I know that I should use a groupby method, and perhaps also a function:
df = df.sort_values(['date', 'value'],ascending=False).groupby('date', group_keys=False).apply(<???>)
The ??? is where I am struggling. I know how to suppress the lowest 5th percentile on a sorted Dataframe as a WHOLE, for instance by doing:
df = df[df.value > df.value.quantile(.05)]
This was the object of another post on StackOverflow. I know that I can also use numpy to do this, and that it is much faster, but my issue is really how to apply that to EACH GROUP independently (each portion of the value column sorted by month) in the Dataframe, not just the whole Dataframe.
Any help would be greatly appreciated
Thank you so very much,
Kind regards,
Berti
Use GroupBy.transform with a lambda function to get a Series the same size as the original DataFrame, so you can filter by boolean indexing:
df = df.sort_values(['date', 'value'],ascending=False)
q = df.groupby('date')['value'].transform(lambda x: x.quantile(.05))
df = df[df.value > q]
print (df)
date value
4 2019_11 45
5 2019_11 33
6 2019_11 24
7 2019_11 11
8 2019_11 8
14 2019_10 94
15 2019_10 78
16 2019_10 74
17 2019_10 12
18 2019_10 3
0 2019-11 100
1 2019-11 89
2 2019-11 87
10 2019-10 100
11 2019-10 98
12 2019-10 96
You could create your own function and apply it:
def remove_bottom_5_pct(arr):
    thresh = np.percentile(arr, 5)
    return arr[arr > thresh]
df.groupby('date', sort=False)['value'].apply(remove_bottom_5_pct)
[out]
date
2019-11 0 100
1 89
2 87
3 86
4 45
5 33
6 24
7 11
8 8
2019-10 10 100
11 98
12 96
13 94
14 94
15 78
16 74
17 12
18 3
Name: value, dtype: int64
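If you want to fill the apply(<???>) placeholder from the question directly and get back a filtered DataFrame rather than a grouped Series, a rough sketch (same column names as above, not part of either original answer) could be:
filtered = (df.groupby('date', sort=False, group_keys=False)
              .apply(lambda g: g[g['value'] > g['value'].quantile(.05)]))
print(filtered)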

In the output I am not getting the complete table from Excel

I just started using pandas. I wanted to import an Excel file with 31 rows and 11 columns, but in the output only some columns are displayed, the middle columns are represented by "....", and in the first column 'EST' the first few elements are displayed with "00:00:00".
Code
import pandas as pd
df = pd.read_excel(r"C:\Users\daryl\PycharmProjects\pandas\Book1.xlsx")
print(df)
Output
C:\Users\daryl\AppData\Local\Programs\Python\Python37\python.exe "C:/Users/daryl/PycharmProjects/pandas/1. Introduction.py"
EST Temperature ... Events WindDirDegrees
0 2016-01-01 00:00:00 38 ... NaN 281
1 2016-02-01 00:00:00 36 ... NaN 275
2 2016-03-01 00:00:00 40 ... NaN 277
3 2016-04-01 00:00:00 25 ... NaN 345
4 2016-05-01 00:00:00 20 ... NaN 333
5 2016-06-01 00:00:00 33 ... NaN 259
6 2016-07-01 00:00:00 39 ... NaN 293
7 2016-08-01 00:00:00 39 ... NaN 79
8 2016-09-01 00:00:00 44 ... Rain 76
9 2016-10-01 00:00:00 50 ... Rain 109
10 2016-11-01 00:00:00 33 ... NaN 289
11 2016-12-01 00:00:00 35 ... NaN 235
12 1-13-2016 26 ... NaN 284
13 1-14-2016 30 ... NaN 266
14 1-15-2016 43 ... NaN 101
15 1-16-2016 47 ... Rain 340
16 1-17-2016 36 ... Fog-Snow 345
17 1-18-2016 25 ... Snow 293
18 1/19/2016 22 ... NaN 293
19 1-20-2016 32 ... NaN 302
20 1-21-2016 31 ... NaN 312
21 1-22-2016 26 ... Snow 34
22 1-23-2016 26 ... Fog-Snow 42
23 1-24-2016 28 ... Snow 327
24 1-25-2016 34 ... NaN 286
25 1-26-2016 43 ... NaN 244
26 1-27-2016 41 ... Rain 311
27 1-28-2016 37 ... NaN 234
28 1-29-2016 36 ... NaN 298
29 1-30-2016 34 ... NaN 257
30 1-31-2016 46 ... NaN 241
[31 rows x 11 columns]
Process finished with exit code 0
To answer your question about the display of only a few columns and "..." :
All of the columns have been properly ingested, but your screen / the console is not wide enough to output all of the columns at once in a "print" fashion. This is normal/expected behavior.
Pandas is not a spreadsheet visualization tool like Excel. Maybe someone can suggest a tool for viewing dataframes in a spreadsheet format in Python; I think I've seen people do that in Spyder, but I don't use it myself.
If you want to make sure all of the columns are there, try using list(df) or print(list(df)).
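If you want the full table printed instead of the elided view, pandas' display options can also be widened; a small sketch using standard options (not specific to this file):
import pandas as pd
pd.set_option('display.max_columns', None)         # show every column
pd.set_option('display.expand_frame_repr', False)  # don't fold wide frames onto extra lines
print(df)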
To answer your question about the EST format:
It looks like you have some data cleaning to do. This is typical work in data science. I am not sure how to best do this - I haven't worked much with dates/datetime yet. However, here is what I see:
The first few items have timestamps as well, likely formatted in HH:MM:SS
They are formatted YYYY-MM-DD
On index row 18, there are / instead of - in the date
The remaining rows are formatted M-DD-YYYY
There's an option documented for read_csv (and read_excel) that may take care of those automatically. It's called "parse_dates". If you turn that option on, e.g. pd.read_excel('file location', parse_dates=['EST']), that enables the date parser for the EST column and may solve your problem.
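A minimal sketch of that call, assuming the same file path as in the question; the mixed date formats in the EST column may still need extra cleaning afterwards:
import pandas as pd
df = pd.read_excel(r"C:\Users\daryl\PycharmProjects\pandas\Book1.xlsx", parse_dates=['EST'])
print(df.dtypes)  # EST should now be datetime64 if parsing succeeded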
Hope this helps! This is my first answer; to anyone who sees it, feel free to edit and improve it.

Bash Scripting - Nested Loop Taking Incorrect Values

I have a bash script as below:
day=(58 34 107 91 43 39 41 76 37 47 70 74 56 19 95 38 48 96 50 76 89 79 46 105 26 88 69 87 23 82 99 77 114 52 87 63 33 52 57 45 48 49 55 60 34 107 48 40 25 20 16)
year=(1952 1953 1954 1955 1956 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004)
for dom in $day; do
    for yrs in $year; do
        ncks -O -d time,$dom imdJJAS$yrs.nc ac_$yrs.nc
    done
done
Basically I am trying to extract the time dimension for each year using the NCO ncks command. The script runs, but the outputs are incorrect. For year 1951 it successfully extracted the 58th time value, but from 1952 onwards it extracts the last value in the day array (16), which is incorrect.
I've tried setting {$day[a]} since it's an array, but if I use this, for all years it extracts the last value in the array instead.
I am not too sure what I'm doing wrong; I've looked through quite a few posts regarding this, but it doesn't seem to be working.
I'd appreciate any help.
Cheers!
$array by itself will expand to the first element in the array. To expand to the full array you should use ${array[@]}:
for dom in "${day[#]}"; do
for yrs in "${year[#]}"; do
ncks -O -d "time,${dom}" "imdJJAS${yrs}.nc" "ac_${yrs}.nc"
done
done
I also quoted your variable expansions and changed $dom and $yrs to ${dom} and ${yrs}. The latter is done to prevent mistakenly referring to an undefined variable: $dom_abc is not the same as ${dom}_abc.
If I understand your intention correctly, you are trying to use corresponding values from both arrays. In that case you need a numerical index, as in the sketch below; for VAR in ARRAY iterates over all values of the array.
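A sketch of that indexed form, assuming the day and year arrays line up one-to-one:
# "${!day[@]}" expands to the indices of the day array
for i in "${!day[@]}"; do
    ncks -O -d "time,${day[$i]}" "imdJJAS${year[$i]}.nc" "ac_${year[$i]}.nc"
done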
