Stacked plot with spaced xticks - python-3.x

I have done an aggregation which resulted in the following dataframe:
df2 = tweet.groupby(['Realdate', 'Type'])['Text'].count().unstack().fillna(0)
Type BLM Black
Realdate
2020-03-01 21.0 9.0
2020-03-02 20.0 13.0
2020-03-03 32.0 16.0
2020-03-04 3.0 9.0
2020-03-05 28.0 16.0
... ... ...
2020-07-10 4050.0 4474.0
2020-07-11 2815.0 3743.0
2020-07-12 3575.0 3863.0
2020-07-13 3435.0 4704.0
2020-07-14 3284.0 4352.0
I then created a stacked plot as follows:
df2[['BLM','Black']].plot(kind='bar', stacked=True, figsize=(20,10))
The output is a stacked bar chart (image omitted) in which the daily x-tick labels crowd together and become unreadable.
I have too many days and I am struggling to space the xticks. Can someone help me, please?
I was tempted to replace my xticks and generate new ones, but I have been unsuccessful so far.
Thanks very much

After a lot of research, I found this answer:
import matplotlib.pyplot as plt

sum_df = tweet.groupby(['Realdate', 'Type'])['Text'].count().unstack().fillna(0)
sum_df = sum_df.reset_index()
print(sum_df)
fig, ax1 = plt.subplots(figsize=(15, 10))
ax1.set_xlabel('Dates')
ax1.set_ylabel('# of Tweets', color='b')
ax1.yaxis.tick_left()
sum_df[['Black', 'BLM']].plot(kind='bar', stacked=True, ax=ax1)
ax1.legend(loc='upper left', fontsize=8)
ax1.set_xticklabels(sum_df.Realdate, rotation=90)
# hide every other date label so the axis stays readable
for label in ax1.xaxis.get_ticklabels()[::2]:
    label.set_visible(False)
plt.legend(['#BlackLivesMatter', '#BLM'])
plt.title('# of Tweets per Day')
plt.show()
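If hiding every other label is still too dense, the same trick generalizes to keeping only every Nth label. A minimal sketch, assuming the df2 from the question (the step of 7, one label per week, is an arbitrary choice, not part of the original answer):
import matplotlib.pyplot as plt

N = 7  # show one label every N days; tune to taste
fig, ax = plt.subplots(figsize=(20, 10))
df2[['BLM', 'Black']].plot(kind='bar', stacked=True, ax=ax)
for i, label in enumerate(ax.xaxis.get_ticklabels()):
    # keep only every Nth date label, hide the rest
    label.set_visible(i % N == 0)
plt.show()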

Related

Subtract two ECDF time series

Hi, I have an ECDF plot produced with seaborn (image omitted).
I obtain it by doing sns.ecdfplot(data=df2, x='time', hue='seg_oper', stat='count').
My dataframe is very simple:
In [174]: df2
Out[174]:
time seg_oper
265 18475 1->0:ADD['TX']
2342 78007 0->1:ADD['RX']
2399 78613 1->0:DELETE['TX']
2961 87097 0->1:ADD['RX']
2994 87210 0->1:ADD['RX']
... ... ...
330823 1002281 1->0:DELETE['TX']
331256 1003545 1->0:DELETE['TX']
331629 1004961 1->0:DELETE['TX']
332375 1006663 1->0:DELETE['TX']
333083 1008644 1->0:DELETE['TX']
[834 rows x 2 columns]
How can I subtract the series 0->1:ADD['RX'] from 1->0:DELETE['TX']?
I like seaborn because most of this data mangling is done inside the library, but in this case I need to subtract these two series myself.
Thanks.
So the first thing is to obtain what seaborn does, but manually. After that (because I need to) I can subtract one series from the other.
Cumulative Count
First we need to obtain a cumulative count for each series.
In [304]: df2['cum'] = df2.groupby(['seg_oper']).cumcount()
In [305]: df2
Out[305]:
time seg_oper cum
265 18475 1->0:ADD['TX'] 0
2961 87097 0->1:ADD['RX'] 1
2994 87210 0->1:ADD['RX'] 2
... ... ... ...
332375 1006663 1->0:DELETE['TX'] 413
333083 1008644 1->0:DELETE['TX'] 414
Pivot the data
Rearrange the DF.
In [307]: df3 = df2.pivot(index='time', columns='seg_oper',values='cum').reset_index()
In [308]: df3
Out[308]:
seg_oper time 0->1:ADD['RX'] 1->0:ADD['TX'] 1->0:DELETE['TX']
0 18475 NaN 0.0 NaN
1 78007 0.0 NaN NaN
2 78613 NaN NaN 0.0
3 87097 1.0 NaN NaN
4 87210 2.0 NaN NaN
.. ... ... ... ...
828 1002281 NaN NaN 410.0
829 1003545 NaN NaN 411.0
830 1004961 NaN NaN 412.0
831 1006663 NaN NaN 413.0
832 1008644 NaN NaN 414.0
[833 rows x 4 columns]
Fill the gaps
I'm assuming that each NaN can be filled with the previous valid value in its column until the next one appears (a forward fill).
df3 = df3.fillna(method='ffill')
At this point, if you plot df3 you'll obtain the same as doing sns.ecdfplot(data=df2, x='time', hue='seg_oper', stat='count') with seaborn.
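For reference, a one-line sketch of that manual plot; drawstyle='steps-post' is my assumption to reproduce the ECDF's step look, it is not part of the original answer:
df3.plot(x='time', drawstyle='steps-post', figsize=(10, 6))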
I still want to subtract one series from the other:
df3['diff'] = df3["0->1:ADD['RX']"] - df3["1->0:DELETE['TX']"]
df3.plot(x='time')
The following plot (image omitted) is the result.
PS: I don't understand the downvote on the question. If someone can explain, I'd appreciate it.

Unable to convert text format to proper data frame using Pandas

I am reading a text source from URL = 'https://www.census.gov/construction/bps/txt/tb2u201901.txt'.
Here I used Pandas to convert it into a DataFrame:
df = pd.read_csv(URL, sep='\t')
After exporting the df I see all the columns merged into a single column, in spite of giving the separator as '\t'. How do I solve this issue?
As your file is not a CSV file, you should use pandas' read_fwf(), because your columns have a fixed width. You also need to skip the first 12 lines, which are not part of your data, and remove the empty lines with dropna().
df = pd.read_fwf(URL, skiprows=12)
df.dropna(inplace=True)
df.head()
United States 94439 58086 1600 1457 33296 1263
1 Northeast 9099.0 3330.0 272.0 242.0 5255.0 242.0
2 New England 1932.0 1079.0 90.0 72.0 691.0 46.0
3 Connecticut 278.0 202.0 8.0 3.0 65.0 8.0
4 Maine 357.0 222.0 6.0 0.0 129.0 5.0
5 Massachusetts 819.0 429.0 38.0 54.0 298.0 23.0
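Note that with skiprows=12 the "United States" totals row gets consumed as the header, which is why it shows up as column names above. A sketch of one way to keep that row as data and name the columns yourself; header=None is standard pandas, but the column names below are my guesses from the file's layout, so check them against the file:
import pandas as pd

URL = 'https://www.census.gov/construction/bps/txt/tb2u201901.txt'
# read the fixed-width data without promoting any row to a header
df = pd.read_fwf(URL, skiprows=12, header=None,
                 names=['Area', 'Total', '1 Unit', '2 Units',
                        '3-4 Units', '5+ Units', 'Structures 5+'])
df.dropna(inplace=True)
print(df.head())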
Your output is actually expected. If you open the URL, you will see that the file starts with sentences that are not tab-separated, so pandas cannot split them into columns.
From line number 9 of the file onward, the results are correct.
(screenshot: https://i.stack.imgur.com/2K61J.png)

Removing outliers based on column variables or multi-index in a dataframe

This is another IQR outlier question. I have a dataframe that looks something like this:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0,100,size=(100, 3)), columns=('red','yellow','green'))
df.loc[0:49,'Season'] = 'Spring'
df.loc[50:99,'Season'] = 'Fall'
df.loc[0:24,'Treatment'] = 'Placebo'
df.loc[25:49,'Treatment'] = 'Drug'
df.loc[50:74,'Treatment'] = 'Placebo'
df.loc[75:99,'Treatment'] = 'Drug'
df = df[['Season','Treatment','red','yellow','green']]
df
I would like to find and remove the outliers for each condition (i.e. Spring Placebo, Spring Drug, etc.), removing just the cell, not the whole row, and I would like to do it for each of the 'red', 'yellow', 'green' columns.
Is there a way to do this without breaking the dataframe into a whole bunch of sub-dataframes with each condition broken out separately? I'm not sure whether this would be easier with 'Season' and 'Treatment' handled as columns or as indices; I'm fine with either way.
I've tried a few things with .iloc and .loc but I can't seem to make them work.
If you need to replace outliers with missing values, use GroupBy.transform with DataFrame.quantile, compare against the lower and upper bounds with DataFrame.lt and DataFrame.gt, chain the masks with | (bitwise OR), and set missing values with DataFrame.mask (NaN is the default replacement, so it is not specified):
import numpy as np
import pandas as pd

np.random.seed(2020)
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 3)), columns=('red', 'yellow', 'green'))
df.loc[0:49, 'Season'] = 'Spring'
df.loc[50:99, 'Season'] = 'Fall'
df.loc[0:24, 'Treatment'] = 'Placebo'
df.loc[25:49, 'Treatment'] = 'Drug'
df.loc[50:74, 'Treatment'] = 'Placebo'
df.loc[75:99, 'Treatment'] = 'Drug'
df = df[['Season', 'Treatment', 'red', 'yellow', 'green']]

g = df.groupby(['Season', 'Treatment'])
# per-group 5th and 95th percentiles, broadcast back to the original shape
df1 = g.transform('quantile', 0.05)
df2 = g.transform('quantile', 0.95)
# only the value columns, not the grouping keys
c = df.columns.difference(['Season', 'Treatment'])
mask = df[c].lt(df1) | df[c].gt(df2)
df[c] = df[c].mask(mask)
print(df)
Season Treatment red yellow green
0 Spring Placebo NaN NaN 67.0
1 Spring Placebo 67.0 91.0 3.0
2 Spring Placebo 71.0 56.0 29.0
3 Spring Placebo 48.0 32.0 24.0
4 Spring Placebo 74.0 9.0 51.0
.. ... ... ... ... ...
95 Fall Drug 90.0 35.0 55.0
96 Fall Drug 40.0 55.0 90.0
97 Fall Drug NaN 54.0 NaN
98 Fall Drug 28.0 50.0 74.0
99 Fall Drug NaN 73.0 11.0
[100 rows x 5 columns]
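Since the question title mentions IQR, the same pattern also works with Tukey-style 1.5 * IQR fences instead of fixed percentiles. A sketch reusing g, c and df from the code above (this substitution is mine, not part of the original answer):
# interquartile range per group, broadcast to the original shape
q1 = g.transform('quantile', 0.25)
q3 = g.transform('quantile', 0.75)
iqr = q3 - q1
# flag cells outside the 1.5 * IQR fences and blank them out
mask = df[c].lt(q1 - 1.5 * iqr) | df[c].gt(q3 + 1.5 * iqr)
df[c] = df[c].mask(mask)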

How to plot multiple charts using matplotlib from unstacked dataframe with Pandas

This is a sample of the dataset I have, produced with the following piece of code:
ComplaintCity = nyc_df.groupby(['City','Complaint Type']).size().sort_values().unstack()
top5CitiesByComplaints = ComplaintCity[top5Complaints].rename_axis(None, axis=1)
top5CitiesByComplaints
Blocked Driveway Illegal Parking Noise - Street/Sidewalk Noise - Commercial Derelict Vehicle
City
ARVERNE 35.0 58.0 29.0 2.0 27.0
ASTORIA 2734.0 1281.0 500.0 1554.0 363.0
BAYSIDE 377.0 514.0 15.0 40.0 198.0
BELLEROSE 95.0 106.0 13.0 37.0 89.0
BREEZY POINT 3.0 15.0 1.0 4.0 3.0
BRONX 12754.0 7859.0 8890.0 2433.0 1952.0
BROOKLYN 28147.0 27461.0 13354.0 11458.0 5179.0
CAMBRIA HEIGHTS 147.0 76.0 25.0 12.0 115.0
CENTRAL PARK NaN 2.0 95.0 NaN NaN
COLLEGE POINT 435.0 352.0 33.0 35.0 184.0
CORONA 2761.0 660.0 238.0 248.0
I want to plot the same data as a horizontal bar chart for each complaint type, displaying the cities with the highest counts of complaints, similar to the image below (omitted). I am not sure how to go about it.
You can create a grid of axis instances with subplots and plot the columns one by one:
import matplotlib.pyplot as plt

fig, axes = plt.subplots(3, 2, figsize=(10, 6))
for c, ax in zip(df.columns, axes.ravel()):
    df[c].sort_values().plot.barh(ax=ax)
fig.tight_layout()
Then you would get something like this (image omitted).
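If you only want the highest-count cities in each panel, Series.nlargest can trim each column before plotting. A sketch assuming the top5CitiesByComplaints frame from the question as df (the cutoff of 10 is an arbitrary choice, not from the original answer):
import matplotlib.pyplot as plt

fig, axes = plt.subplots(3, 2, figsize=(10, 6))
for c, ax in zip(df.columns, axes.ravel()):
    # top 10 cities for this complaint type, largest bar at the top
    df[c].nlargest(10).sort_values().plot.barh(ax=ax, title=c)
fig.tight_layout()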

How do I fill these `NaN` values properly?

Here's my original dataframe with the NaN values I'm trying to fill:
https://prnt.sc/i40j33
If I use df.interpolate(axis=1) to fill the NaN values, only some of the rows fill up properly with a number.
For example:
https://prnt.sc/i40mgq
As you can see in the screenshot, the cell at column 1981, row 3, which had a NaN value, has been filled properly. I want to fill the rest of the NaNs like that as well. Any idea how I do that?
Using DataFrame.interpolate()
In your case it is failing because there are no columns to the left, and therefore the interpolate method doesn't know what to interpolate toward: missing_value = (left_value + right_value) / 2.
So you could, for example, insert a column to the left with all 0's (if you would like to impute your missing values on the first column with half of the next value), as such:
df.insert(loc=0, column='allZeroes', value=0)
After this, you could interpolate as you are doing and then remove the helper column.
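Putting the pieces together, a minimal sketch of the whole sequence (the 'allZeroes' name comes from the answer; the rest is my assembly of the steps described above):
df.insert(loc=0, column='allZeroes', value=0)  # temporary anchor column of zeros
df = df.interpolate(axis=1)                    # row-wise linear interpolation
df = df.drop(columns='allZeroes')              # remove the helper column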
General missing value imputation
Either use df.fillna('DEFAULT-VALUE') as Alex mentioned in the comments to the question. Docs here
or do something like:
df.loc[df.my_col.isnull(), 'my_col'] = 'DEFAULT-VALUE'
I'd recommend using fillna, as you can use methods such as forward fill (ffill) -- imputing the missing values with the previous value -- and other similar methods.
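For instance, a one-line sketch of a forward fill on a hypothetical column my_col:
df['my_col'] = df['my_col'].fillna(method='ffill')  # propagate the last valid value forward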
It seems like you might want to interpolate on axis=0, column-wise:
>>> df = pd.DataFrame(np.arange(35, dtype=float).reshape(5,7),
columns=[1951, 1961, 1971, 1981, 1991, 2001, 2001],
index=range(0, 5))
>>> df.iloc[1:3, 0] = np.nan
>>> df.iloc[3, 3] = np.nan
>>> df.interpolate(axis=0)
1951 1961 1971 1981 1991 2001 2001
0 0.0 1.0 2.0 3.0 4.0 5.0 6.0
1 7.0 8.0 9.0 10.0 11.0 12.0 13.0
2 14.0 15.0 16.0 17.0 18.0 19.0 20.0
3 21.0 22.0 23.0 24.0 25.0 26.0 27.0
4 28.0 29.0 30.0 31.0 32.0 33.0 34.0
Currently you're interpolating row-wise. NaNs that "begin" a Series aren't padded by a value on either side, making interpolation impossible for them.
Update: pandas added more options for this in v0.23.0.
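For example, the limit_area keyword added to interpolate() in 0.23.0 restricts which NaNs get filled; a sketch assuming the df above:
# only fill NaNs that are surrounded by valid values on both sides
df.interpolate(axis=0, limit_area='inside')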
