Draw a count plot to show the number of each type of crime discovered each year - python-3.x

I need to draw a count plot showing the number of each type of crime discovered each year, using some columns from a CSV file. I have two columns to work with (primary type and date), so any help implementing this in Python 3 would be appreciated.

Try this (assuming the date column is called Date and the crime-type column Primary Type; count per year first, then plot the counts):
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('FileName.csv')
df['Year'] = pd.to_datetime(df['Date']).dt.year
counts = pd.crosstab(df['Year'], df['Primary Type'])
print(counts)
counts.plot(kind='bar')
plt.xlabel('Year')
plt.ylabel('Number of crimes')
plt.show()
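For a quick sanity check of the counting step, here is a self-contained sketch with made-up rows (the column names Date and Primary Type are assumptions taken from the question):

```python
import pandas as pd

# Hypothetical sample resembling the CSV's two relevant columns
df = pd.DataFrame({
    'Date': ['2019-03-01', '2019-07-15', '2020-01-10', '2020-02-20'],
    'Primary Type': ['THEFT', 'BATTERY', 'THEFT', 'THEFT'],
})
df['Year'] = pd.to_datetime(df['Date']).dt.year

# Rows = years, columns = crime types, values = counts
counts = pd.crosstab(df['Year'], df['Primary Type'])
print(counts)
# counts.plot(kind='bar') would then render the grouped count plot
```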

Related

Use KDTree/KNN Return Closest Neighbors

I have two python pandas dataframes. One contains all NFL Quarterbacks' College Football statistics since 2007 and a label on the type of player they are (Elite, Average, Below Average). The other dataframe contains all of the college football qbs' data from this season along with a prediction label.
I want to run some sort of analysis to determine the two closest NFL comparisons for every college football QB based on their labels. I'd like to add the two comparable QBs as two new columns to the second dataframe.
The feature names in both dataframes are the same. Here is what the dataframes look like:
Player Year Team GP Comp % YDS TD INT Label
Player A 2020 ASU 12 65.5 3053 25 6 Average
For the example above, I'd like to find the two closest neighbors to Player A that also have the label "Average" in the first dataframe.
The way I thought of doing this was to use Scipy's KDTree and run a query tree:
tree = KDTree(nfl[features], leafsize=nfl[features].shape[0]+1)

closest = []
for row in college.iterrows():
    distances, ndx = tree.query(row[features], k=2)
    closest.append(ndx)

print(closest)
However, the print statement returned an empty list. Is this the right way to solve my problem?
.iterrows() yields plain tuples (index, Series), where index is the index of the row and the Series holds the feature values, itself indexed by the column names (see below).
As you have it, row is stored as that whole tuple, so row[features] won't really do anything. What you're really after is the Series holding the features and values, i.e. row[1]. You can either index that directly, or break the tuple up in your loop by doing for idx, row in df.iterrows():, and then work with the Series row.
Scikit-learn is a good package to use here (it's actually built on SciPy, so you'll notice the same syntax). You'll have to edit the code to your specifications (e.g. filter to only the "Average" players; if you one-hot encode the category columns, you may need to add those to the features; etc.). To give you an idea, I made up these dataframes just for an example (the nfl one is accurate, but the college one is completely made up): the code below builds the KD-tree, then takes each row in the college dataframe and finds the 2 closest values in the nfl dataframe. I have it print out the names, but as you can see with print(closest), the raw index arrays are there for you as well.
import pandas as pd
from sklearn.neighbors import KDTree

nfl = pd.DataFrame([['Tom Brady','1999','Michigan',11,61.0,2217,16,6,'Average'],
                    ['Aaron Rodgers','2004','California',12,66.1,2566,24,8,'Average'],
                    ['Peyton Manning','1997','Tennessee',12,60.2,3819,36,11,'Average'],
                    ['Drew Brees','2000','Purdue',12,60.4,3668,26,12,'Average'],
                    ['Dan Marino','1982','Pitt',12,58.5,2432,17,23,'Average'],
                    ['Joe Montana','1978','Notre Dame',11,54.2,2010,10,9,'Average']],
                   columns=['Player','Year','Team','GP','Comp %','YDS','TD','INT','Label'])
college = pd.DataFrame([['Joe Smith','2019','Illinois',11,55.6,1045,15,7,'Average'],
                        ['Mike Thomas','2019','Wisconsin',11,67,2045,19,11,'Average'],
                        ['Steve Johnson','2019','Nebraska',12,57.3,2345,9,19,'Average']],
                       columns=['Player','Year','Team','GP','Comp %','YDS','TD','INT','Label'])

features = ['GP','Comp %','YDS','TD','INT']

tree = KDTree(nfl[features], leaf_size=nfl[features].shape[0]+1)

closest = []
for idx, row in college.iterrows():
    X = row[features].values.reshape(1, -1)
    distances, ndx = tree.query(X, k=2, return_distance=True)
    closest.append(ndx)
    collegePlayer = college.loc[idx, 'Player']
    closestPlayers = [nfl.loc[x, 'Player'] for x in ndx[0]]
    print('%s closest to: %s' % (collegePlayer, closestPlayers))

print(closest)
Output:
Joe Smith closest to: ['Joe Montana', 'Tom Brady']
Mike Thomas closest to: ['Joe Montana', 'Tom Brady']
Steve Johnson closest to: ['Dan Marino', 'Tom Brady']
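As the answer notes, you may also want to restrict the candidates to players sharing the query's label. A small sketch of that filtering step with made-up rows (brute-force NumPy distances stand in for the KDTree query here; on data this small they return the same neighbors):

```python
import numpy as np
import pandas as pd

# Made-up subset of the dataframes above; only the numeric features matter here
nfl = pd.DataFrame(
    [['Tom Brady', 61.0, 2217, 16, 6, 'Average'],
     ['Aaron Rodgers', 66.1, 2566, 24, 8, 'Average'],
     ['Joe Montana', 54.2, 2010, 10, 9, 'Elite']],
    columns=['Player', 'Comp %', 'YDS', 'TD', 'INT', 'Label'])
features = ['Comp %', 'YDS', 'TD', 'INT']

def closest_with_label(row, label, k=2):
    # Restrict the candidate pool to players sharing the query's label
    pool = nfl[nfl['Label'] == label]
    # Euclidean distance from the query row to every pooled player
    d = np.linalg.norm(pool[features].values.astype(float)
                       - row[features].values.astype(float), axis=1)
    return pool.iloc[np.argsort(d)[:k]]['Player'].tolist()
```

You would call this once per college row, passing that row's Series and its Label value.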

How to make a categorical count bar plot with time on x-axis

I want to count the number of occurrences of categories in a variable and plot it against time.
The data looks like following:
Date_column Categorical_variable
20-01-2019 A
20-01-2019 B
20-01-2019 C
21-01-2019 A
21-02-2019 A
22-02-2019 B
........................
23-04-2020 A
I want to show that in the month of Jan I had 1 occurrence each of B and C, whereas 2 occurrences of A. In Feb, I had 1 occurrence each of A and B, and so on. The bar plots can be stacked to show the total number of occurrences.
I've come very close to it, but haven't been able to draw a plot out of it:
df['Date_column'].groupby([df.Date_column.dt.year, df.Date_column.dt.month]).agg('count')
The other way is to change the dates to the 1st of every month and then group by to count occurrences, but I'm unable to draw a plot out of that either:
df.groupby([df['Date_column'], df['Categorical_variable']]).count()
Use crosstab with Series.dt.to_period:
df['Date_column'] = pd.to_datetime(df['Date_column'])
df = pd.crosstab(df['Date_column'].dt.to_period('m'), df['Categorical_variable'])
df.plot.bar()
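Applied to the sample data from the question (note dayfirst=True, since the dates are in day-month-year order), the crosstab comes out as described, and stacking is a one-argument change:

```python
import pandas as pd

# Sample rows from the question
df = pd.DataFrame({
    'Date_column': ['20-01-2019', '20-01-2019', '20-01-2019',
                    '21-01-2019', '21-02-2019', '22-02-2019'],
    'Categorical_variable': ['A', 'B', 'C', 'A', 'A', 'B'],
})
df['Date_column'] = pd.to_datetime(df['Date_column'], dayfirst=True)

# Months as rows, categories as columns, counts as values
counts = pd.crosstab(df['Date_column'].dt.to_period('M'), df['Categorical_variable'])
print(counts)
# counts.plot.bar(stacked=True) stacks the categories per month
```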

Graphing three databases in one graph in Python

How can I plot the graph, getting the data from those 3 sources, and using only the first letters and trailing digits of the first column on the X-axis, as in the Excel graph below? And how can I show the first-column labels only at intervals of 20: aa010, aa030, aa050 ... etc.?
I have three sets of data from a source, each with 2 columns. Some of the first-column names are shared between the 3 sources, but each source has different data corresponding to them in the second column.
I need to use Python to plot those 3 datasets in one graph.
The X-axis should be the combination of the first columns of the three sources. The data is in the format aa001 (up to, sometimes, aa400) and ab001 (up to, sometimes, ab400).
So the X-axis should start with aa001 and end with ab400. Since labeling every point would overfill the x-axis and make it impossible to read at a normal size, I want to show only aa020, aa040, ... (using the number in the string, only every 20th label).
The Y-axis should be just numbers from 0-10000 (which may need to change if at least one of the datasets has a max above 10000).
I will add the sample graph I created using Excel.
My sample data would be (note: the data is not sorted by any column, and I would prefer to sort it as stated above, aa001 ... ab400):
Data1
Name Number
aa001 123
aa032 4211
ab400 1241
ab331 33
Data2
Name Number
aa002 1213
aa032 41
ab378 4231
ab331 63
aa163 999
Data3
Name Number
aa209 9876
ab132 5432
ab378 4124
aa031 754
aa378 44
ab344 1346
aa222 73
aa163 414
ab331 61
I searched Matplotlib and found an example that plots the way I want (with a dot for each x-y point), but it doesn't directly apply to my question. This is the similar code I found:
import matplotlib.pyplot as plt

x = range(100)
y = range(100, 200)
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(x[:4], y[:4], s=10, c='b', marker="s", label='first')
ax1.scatter(x[40:], y[40:], s=10, c='r', marker="o", label='second')
plt.legend(loc='upper left')
plt.show()
Sample graph (instead of aa for the X-axis -> bc; ab -> mc).
I expect to see a graph like the following, but skipping every 20th label on the X-axis. (I want the first graph dotted/symbolled like the second graph, but the second graph should use the X-axis of the first graph, showing only every 20th name.)
First graph -> I want to use an X-axis like this, but without every label (only every 20th).
Second graph -> I want to use symbols instead of lines, as in this one.
Please, let me know if I need to provide any other information or clarify/correct myself. Any help is appreciated!
The answer is as follows, though the code below still has some errors; the final version will be posted in the follow-up question:
Using sorted file to plot X-axis with corresponding Y-values from the original file
from matplotlib import pyplot as plt
import numpy as np
import csv

# hostnum.csv holds the names in the desired sorted order
csv_file = []
with open('hostnum.csv', 'r') as f:
    csvreader = csv.reader(f)
    for line in csvreader:
        csv_file.append(line)

us_csv_file = []
with open('unsorted.csv', 'r') as f:
    csvreader = csv.reader(f)
    for line in csvreader:
        us_csv_file.append(line)

# Sort the unsorted rows by where their name appears in the sorted file
sorted_names = [row[0] for row in csv_file]
us_csv_file.sort(key=lambda x: sorted_names.index(x[0]))

plt.plot([int(item[1]) for item in us_csv_file], 'o-')
plt.xticks(np.arange(len(us_csv_file)), [item[0] for item in us_csv_file])
plt.show()
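For the every-20th-label requirement, one approach is to set a tick only at every 20th position and label just those. A sketch with synthetic names and values (the aa001-aa400 range is taken from the question; the random values are made up):

```python
import numpy as np
from matplotlib import pyplot as plt

# Hypothetical names/values standing in for the merged, sorted CSV data
names = ['aa%03d' % i for i in range(1, 401)]          # aa001 ... aa400
values = np.random.randint(0, 10000, size=len(names))

fig, ax = plt.subplots()
ax.plot(range(len(names)), values, 'o')

# Place a tick only every 20th name so the axis stays readable
step = 20
ax.set_xticks(np.arange(0, len(names), step))
ax.set_xticklabels(names[::step], rotation=90)
# plt.show() then renders it
```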

How to force plotly plots to correct starting point on x axis?

I'm plotting the sales numbers (amount) per week (YYYYWW) per product (product_name).
All the data appears on the graph; however, some of the products show incorrectly. If Product A only started having sales figures from 2019 (i.e. no sales figures for the whole of 2018), then I want the line for that product to be zero in 2018 and begin showing values from 2019.
What's happening instead is that Product A's line starts at the origin of the graph, so its week 1 of sales sits at YYYYWW 201801 instead.
Is there a more efficient way to solve this than placing zero values for the product with a list comprehension?
import plotly.graph_objs as go
import plotly.offline as pyo
data = [go.Scatter(x=sorted(df.YYYYWW.unique().astype(str)),
                   y=list(df.loc[df.product_name == 'Product A',
                                 ['amount', 'YYYYWW']].groupby('YYYYWW').sum().amount),
                   mode='lines+markers',
                   )]
pyo.plot(data)
The values in x are: 201801, 201802, ... 201920
The values in y are:
YYYYWW amount
2019/15 454.32
2019/16 1131.15
2019/17 1152.96
2019/18 2822.77
2019/19 3580.86
2019/20 2265.06
Solved it!
My x values should be taken from the same subset of the dataframe as my y values:
x = df.loc[df.product_name == i].YYYYWW.unique().astype(str)
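The underlying fix is that x and y must be derived from the same filtered subset so they stay aligned. A pandas-only sketch of that grouping with made-up numbers (the go.Scatter call stays as above, fed these x and y):

```python
import pandas as pd

# Made-up sales rows; the point is that x and y come from the same subset
df = pd.DataFrame({
    'product_name': ['Product A', 'Product A', 'Product A', 'Product B', 'Product B'],
    'YYYYWW': [201915, 201915, 201916, 201801, 201802],
    'amount': [200.0, 254.32, 1131.15, 10.0, 20.0],
})

sub = df[df['product_name'] == 'Product A']
weekly = sub.groupby('YYYYWW')['amount'].sum()
x = weekly.index.astype(str).tolist()   # only the weeks this product actually sold in
y = weekly.tolist()
```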

Plot number of occurrences in Pandas dataframe (2)

this is a followup from the previous question: Plot number of occurrences from Pandas DataFrame
I'm trying to produce a bar chart in descending order from the results of a pandas dataframe that is grouped by "Issuing Office." The data comes from a csv file which has 3 columns: System (string), Issuing Office (string), Error Type (string). The first four commands work fine - read, fix the column headers, strip out the offices I don't need, and reset the index. However I've never displayed a chart before.
CSV looks like:
System Issuing Office Error Type
East N1 Error1
East N1 Error1
East N2 Error1
West N1 Error3
Looking for a simple horizontal bar chart that would show N1 had a count of 3, N2 had a count of 2.
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv('mydatafile.csv',index_col=None, header=0) #ok
df.columns = [c.replace(' ','_') for c in df.columns] #ok
df = df[df['Issuing_Office'].str.contains("^(?:N|M|V|R)")] #ok
df = df.reset_index(drop=True) #ok
# produce chart that shows how many times an office came up (descending)
df.groupby([df.index, 'Issuing_Office']).count().plot(kind='bar')
plt.show()
# produce chart that shows how many error types per Issuing Office (Descending).
There are no date fields in this which makes it different than the original question. Any help is greatly appreciated :)
JohnE's solution worked. Used the code:
# produce chart that shows how many times an office came up (descending)
df['Issuing_Office'].value_counts().plot(kind='barh') #--JohnE
plt.gca().invert_yaxis()
plt.show()
# produce chart that shows how many error types per Issuing Office N1 (Descending).
dfN1 = df[df['Issuing_Office'].str.contains('N1')]
dfN1['Error_Type'].value_counts().plot(kind='barh')
plt.gca().invert_yaxis()
plt.show()
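If you later want every office's error breakdown in a single chart instead of one plot per office, a crosstab does it in one step. A sketch using the sample rows from the question (column names after the underscore fix):

```python
import pandas as pd

# Sample rows matching the CSV above
df = pd.DataFrame({
    'System': ['East', 'East', 'East', 'West'],
    'Issuing_Office': ['N1', 'N1', 'N2', 'N1'],
    'Error_Type': ['Error1', 'Error1', 'Error1', 'Error3'],
})

# Offices as rows, error types as columns, counts as values
counts = pd.crosstab(df['Issuing_Office'], df['Error_Type'])
print(counts)
# counts.plot(kind='barh', stacked=True) shows all offices' error mixes in one chart
```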
