Plotting UK Districts, Postcode Areas and Regions

I am wondering if we can make a choropleth similar to the one below with UK District, Postcode Area and Region maps.
It would be great if you could show an example of a UK choropleth.
Geographic shape files can be downloaded from http://martinjc.github.io/UK-GeoJSON/
import os

import folium
import pandas as pd
from branca.utilities import split_six

# paths to the example US data
state_geo = os.path.join('data', 'us-states.json')
state_unemployment = os.path.join('data', 'US_Unemployment_Oct2012.csv')
state_data = pd.read_csv(state_unemployment)
j1 = pd.read_json(state_geo)  # not used below

# six-bin threshold scale computed from the unemployment values
threshold_scale = split_six(state_data['Unemployment'])

m = folium.Map(location=[48, -102], zoom_start=3)
m.choropleth(
    geo_data=state_geo,  # newer folium uses geo_data; geo_path/geo_str are from older releases
    data=state_data,
    columns=['State', 'Unemployment'],
    key_on='feature.id',
    threshold_scale=threshold_scale,  # pass the scale so the six bins are actually used
    fill_color='YlGn',
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name='Unemployment Rate (%)'
)
m  # renders inline in a Jupyter notebook
m.save('choropleth.html')

This is what I did.
First, collect your data. I used www.nomisweb.co.uk to collect employment rates for the main regions:
North East (England)
North West (England)
Yorkshire and The Humber
East Midlands (England)
West Midlands (England)
East of England
London
South East (England)
South West (England)
Wales
Scotland
Northern Ireland
I saved this dataset as UKEmploymentData.csv. Note that you will have to change the region names to match the geo data IDs.
Then I followed what you posted using the NUTS data from the ONS geoportal.
import folium
import pandas as pd

# read in the employment data collected from nomisweb
df = pd.read_csv('UKEmploymentData.csv')

# NUTS level 1 boundaries from the ONS geoportal
state_geo = 'http://geoportal1-ons.opendata.arcgis.com/datasets/01fd6b2d7600446d8af768005992f76a_4.geojson'

m = folium.Map(location=[55, -4], zoom_start=5)  # note the negative (westward) longitude to centre on the UK
m.choropleth(
    geo_data=state_geo,
    data=df,
    columns=['region', 'Total in employment - aged 16 and over'],
    key_on='feature.properties.nuts118nm',
    fill_color='YlGn',
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name='Employment Rate (%)',
    highlight=True
)
m  # renders inline in a Jupyter notebook
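Note that Map.choropleth has since been deprecated; recent folium releases expose the same functionality through the folium.Choropleth class. A minimal sketch of the equivalent call, assuming the same df and state_geo as above:
import folium

m = folium.Map(location=[55, -4], zoom_start=5)
folium.Choropleth(
    geo_data=state_geo,
    data=df,
    columns=['region', 'Total in employment - aged 16 and over'],
    key_on='feature.properties.nuts118nm',
    fill_color='YlGn',
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name='Employment Rate (%)',
    highlight=True
).add_to(m)
m.save('uk_choropleth.html')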

Related

Ordering values across different groups in Pandas

I am trying to order the prices of different cars across different regions, as an example. Below is the sample data set.
import pandas as pd
region = ['east','west', 'central', 'east', 'west', 'central', 'east', 'west', 'central']
automobile = ['bmw', 'bmw', 'bmw', 'tesla', 'tesla', 'tesla', 'lucid', 'lucid', 'lucid']
price = [250, 350, 300, 500, 550, 575, 950, 900, 850]
df_test = pd.DataFrame({'region': region,
                        'automobile': automobile,
                        'price': price})
display(df_test)  # display() is available in Jupyter/IPython
I would like to make sure that for each automobile, the price across the three regions is synchronized such
that East <= Central <= West (as they are for BMW). If they are not synced, the price in the East should be
the base price. E.g. for Lucid, its price in Central should be 950 and then in West should be 950 as well. For Tesla,
the price in West needs to be raised to match Central, i.e. 575.
I think I should use groupby but just can't make any progress. I imagine that a function like ffill() could be used after pivoting the data, but I hope there is a simpler solution.
Any help would be appreciated. Thank you.
You can use cummax with groupby, but you need to sort your data into the correct order with a categorical dtype:
# assign the order for the regions
df_test['region'] = pd.Categorical(df_test['region'], ordered=True, categories=['east', 'central', 'west'])
df_test['price'] = (df_test.sort_values(['automobile', 'region'])   # sort data in the correct order
                           .groupby('automobile')['price'].cummax() # use cummax to correct the values
                    )
Output:
region automobile price
0 east bmw 250
1 west bmw 350
2 central bmw 300
3 east tesla 500
4 west tesla 575
5 central tesla 575
6 east lucid 950
7 west lucid 950
8 central lucid 950
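As a quick sanity check (a small sketch, not part of the original answer), you can pivot the corrected frame so each region becomes a column and confirm the East <= Central <= West ordering row by row:
# pivot so each region is a column; the categorical dtype keeps east/central/west ordered
wide = df_test.pivot(index='automobile', columns='region', values='price')
print((wide['east'] <= wide['central']) & (wide['central'] <= wide['west']))  # expect all True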

How to create a Rank Column with recurring rank number? (Excel)

I would be happy if you checked the tables below first, so you can clearly and directly understand my question.
I want to generate a field that ranks every state according to its assigned region.
These are my inputs:
| Region | State          |
| West   | California     |
| West   | Arizona        |
| West   | Washington     |
| East   | New York       |
| East   | Florida        |
| East   | North Carolina |
| South  | Texas          |
| South  | Louisiana      |
| South  | Alabama        |
I would like to generate the "Rank State" field
| Region | State          | Rank State |
| West   | California     | 1          |
| West   | Arizona        | 2          |
| West   | Washington     | 3          |
| East   | New York       | 1          |
| East   | Florida        | 2          |
| East   | North Carolina | 3          |
| South  | Texas          | 1          |
| South  | Louisiana      | 2          |
| South  | Alabama        | 3          |
The question is: what calculation or method can produce the "Rank State" column/field?
I'd be happy to accept Excel solutions if possible :)
The way I see it, you want to count how many states above or including the selected one are in the same region?
Assuming 'Region' is column A in Excel, paste this in row 2 of the Rank column:
=COUNTIF($A$2:$A2, $A2)
Then autofill it down the column (double-click or drag the little green square at the bottom right of the selected cell)
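If you ever need the same ranking in pandas instead of Excel, groupby().cumcount() gives an equivalent running count per region (a small sketch, assuming the table above is loaded into a DataFrame):
import pandas as pd

df = pd.DataFrame({
    'Region': ['West', 'West', 'West', 'East', 'East', 'East', 'South', 'South', 'South'],
    'State': ['California', 'Arizona', 'Washington', 'New York', 'Florida',
              'North Carolina', 'Texas', 'Louisiana', 'Alabama'],
})
# cumcount() is 0-based, so add 1 to match the desired Rank State values
df['Rank State'] = df.groupby('Region').cumcount() + 1
print(df)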

Python: how to remove footnotes when loading data, and how to select the first when there is a pair of numbers

I am new to python and looking for help.
import requests
import bs4 as bs

resp = requests.get("https://en.wikipedia.org/wiki/World_War_II_casualties")
soup = bs.BeautifulSoup(resp.text, 'html.parser')
table = soup.find("table", {"class": "wikitable sortable"})
deaths = []
for row in table.findAll('tr')[1:]:
    death = row.findAll('td')[5].text.strip()
    deaths.append(death)
It comes out as
'30,000',
'40,400',
'',
'88,000',
'2,000',
'21,500',
'252,600',
'43,600',
'15,000,000[35]to 20,000,000[35]',
'100',
'340,000 to 355,000',
'6,000',
'3,000,000to 4,000,000',
'1,100',
'83,000',
'100,000[49]',
'85,000 to 95,000',
'600,000',
'1,000,000to 2,200,000',
'6,900,000 to 7,400,000',
...
'557,000',
'5,900,000[115] to 6,000,000[116]',
'40,000to 70,000',
'500,000[39]',
'36,000–50,000',
'11,900',
'10,000',
'20,000,000[141] to 27,000,000[142][143][144][145][146]',
'',
'2,100',
'100',
'7,600',
'200',
'450,900',
'419,400',
'1,027,000[160] to 1,700,000[159]',
'',
'70,000,000to 85,000,000']
I want to plot a graph, but the [] footnotes would completely ruin it. Many of the values have footnotes. Is it also possible to select the first number when there is a pair in one cell? I'd appreciate it if anyone could teach me... Thank you.
You can use soup.find_next() with the text=True parameter, then split/strip accordingly.
For example:
import requests
from bs4 import BeautifulSoup

url = 'https://en.wikipedia.org/wiki/World_War_II_casualties'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

for tr in soup.table.select('tr:has(td)')[1:]:
    tds = tr.select('td')
    if not tds[0].b:
        continue
    name = tds[0].b.get_text(strip=True, separator=' ')
    casualties = tds[5].find_next(text=True).strip()
    print('{:<30} {}'.format(name, casualties.split('–')[0].split()[0] if casualties else ''))
Prints:
Albania 30,000
Australia 40,400
Austria
Belgium 88,000
Brazil 2,000
Bulgaria 21,500
Burma 252,600
Canada 43,600
China 15,000,000
Cuba 100
Czechoslovakia 340,000
Denmark 6,000
Dutch East Indies 3,000,000
Egypt 1,100
Estonia 83,000
Ethiopia 100,000
Finland 85,000
France 600,000
French Indochina 1,000,000
Germany 6,900,000
Greece 507,000
Guam 1,000
Hungary 464,000
Iceland 200
India 2,200,000
Iran 200
Iraq 700
Ireland 100
Italy 492,400
Japan 2,500,000
Korea 483,000
Latvia 250,000
Lithuania 370,000
Luxembourg 5,000
Malaya & Singapore 100,000
Malta 1,500
Mexico 100
Mongolia 300
Nauru 500
Nepal
Netherlands 210,000
Newfoundland 1,200
New Zealand 11,700
Norway 10,200
Papua and New Guinea 15,000
Philippines 557,000
Poland 5,900,000
Portuguese Timor 40,000
Romania 500,000
Ruanda-Urundi 36,000
South Africa 11,900
South Pacific Mandate 10,000
Soviet Union 20,000,000
Spain
Sweden 2,100
Switzerland 100
Thailand 7,600
Turkey 200
United Kingdom 450,900
United States 419,400
Yugoslavia 1,027,000
Approx. totals 70,000,000
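Alternatively, if you want to keep the asker's original deaths list, a small regex-based cleanup can strip the [n] footnote markers and keep the first number of each range before plotting. This is a sketch; clean_casualties is a hypothetical helper, not from either snippet:
import re

def clean_casualties(value):
    # strip footnote markers like "[35]"
    value = re.sub(r'\[\d+\]', '', value)
    # keep the first number when a range like "a to b" or "a–b" is given
    first = re.split(r'\s*(?:to|–)\s*', value)[0].strip()
    return int(first.replace(',', '')) if first else None

cleaned = [clean_casualties(d) for d in deaths]
cleaned = [c for c in cleaned if c is not None]  # drop empty cells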

How to select records with not exists condition in pandas dataframe

I have two dataframes as below. I want to rewrite in pandas the following data selection SQL query, which contains a NOT EXISTS condition.
SQL
SELECT ORDER_NUM, DRIVER
FROM df
WHERE 1=1
  AND NOT EXISTS (
      SELECT 1
      FROM order_addition oa
      WHERE oa.Flag_Value = 'Y'
        AND df.ORDER_NUM = oa.ORDER_NUM
  )
Sample data
order_addition.head(10)
ORDER_NUM Flag_Value
22574536 Y
32459745 Y
15642314 Y
12478965 N
25845673 N
36789156 N
df.head(10)
ORDER_NUM REGION DRIVER
22574536 WEST Ravi
32459745 WEST David
15642314 SOUTH Rahul
12478965 NORTH David
25845673 SOUTH Mani
36789156 SOUTH Tim
How can this be done easily in pandas?
IIUC, you can left-merge df2 with the rows of df1 where Flag_Value equals 'Y', and then keep the rows where the merge produced NaN:
result = df2.merge(df1[df1["Flag_Value"].eq("Y")],how="left",on="ORDER_NUM")
print (result[result["Flag_Value"].isnull()])
ORDER_NUM REGION DRIVER Flag_Value
3 12478965 NORTH David NaN
4 25845673 SOUTH Mani NaN
5 36789156 SOUTH Tim NaN
Or even simpler if your ORDER_NUM are unique:
print (df2.loc[~df2["ORDER_NUM"].isin(df1.loc[df1["Flag_Value"].eq("Y"),"ORDER_NUM"])])
ORDER_NUM REGION DRIVER
3 12478965 NORTH David
4 25845673 SOUTH Mani
5 36789156 SOUTH Tim
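Another common pandas idiom for this kind of anti-join is merge with indicator=True, which labels each row by which side of the merge it came from (a sketch using the same df1/df2 names as the answer above):
result = df2.merge(df1[df1["Flag_Value"].eq("Y")], how="left",
                   on="ORDER_NUM", indicator=True)
print(result.loc[result["_merge"].eq("left_only"), ["ORDER_NUM", "REGION", "DRIVER"]])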

Handling duplicate data with pandas

Hello everyone, I'm having some issues using the pandas Python library. Basically, I'm reading a CSV
file with pandas and want to remove duplicates. I've tried everything and the problem is still there.
import sqlite3
import pandas as pd

connection = sqlite3.connect("test.db")  # opened but not used below

# read the CSV into a pandas dataframe
dataframe = pd.read_csv('Countries.csv')
countries = dataframe.loc[:, ['Retailer country', 'Continent']]
countries.head(6)
Output of this will be:
Retailer country Continent
-----------------------------
0 United States North America
1 Canada North America
2 Japan Asia
3 Italy Europe
4 Canada North America
5 United States North America
6 France Europe
I want to drop duplicate rows based on the columns of the dataframe above, so that I end up with
the unique country/continent pairs. The desired output is:
Retailer country Continent
-----------------------------
0 United States North America
1 Canada North America
2 Japan Asia
3 Italy Europe
4 France Europe
I have tried some of the methods mentioned in "Using pandas for duplicate values" and looked around the net, and realized I could use the df.drop_duplicates() function, but when I use the code below with df.head(3) it displays only one row. What can I do to get those unique rows and finally loop through them?
countries.head(4)
country = countries['Retailer country']
continent = countries['Continent']
df = pd.DataFrame({'a':[country], 'b':[continent]})
df.head(3)
It seems like a simple group-by could solve your problem.
import pandas as pd

na = 'North America'
a = 'Asia'
e = 'Europe'
df = pd.DataFrame({'Retailer': [0, 1, 2, 3, 4, 5, 6],
                   'country': ['United States', 'Canada', 'Japan', 'Italy', 'Canada', 'United States', 'France'],
                   'continent': [na, na, a, e, na, na, e]})
df.groupby(['country', 'continent']).agg('count').reset_index()
The Retailer column now shows a count of how many times that country/continent combination occurs. You could remove it with df = df[['country', 'continent']].
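For what it's worth, the asker's original drop_duplicates() idea also works directly on the two-column frame; a small sketch, including one way to loop over the unique rows afterwards:
unique_countries = countries.drop_duplicates().reset_index(drop=True)
print(unique_countries)

# loop through the unique country/continent pairs
for _, row in unique_countries.iterrows():
    print(row['Retailer country'], row['Continent'])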
