Python: how to remove footnotes when loading data, and how to select the first number when there is a pair - python-3.x

I am new to Python and looking for help.
import requests
import bs4 as bs

resp = requests.get("https://en.wikipedia.org/wiki/World_War_II_casualties")
soup = bs.BeautifulSoup(resp.text)
table = soup.find("table", {"class": "wikitable sortable"})
deaths = []
for row in table.findAll('tr')[1:]:
    death = row.findAll('td')[5].text.strip()
    deaths.append(death)
It comes out as
'30,000',
'40,400',
'',
'88,000',
'2,000',
'21,500',
'252,600',
'43,600',
'15,000,000[35]to 20,000,000[35]',
'100',
'340,000 to 355,000',
'6,000',
'3,000,000to 4,000,000',
'1,100',
'83,000',
'100,000[49]',
'85,000 to 95,000',
'600,000',
'1,000,000to 2,200,000',
'6,900,000 to 7,400,000',
...
'557,000',
'5,900,000[115] to 6,000,000[116]',
'40,000to 70,000',
'500,000[39]',
'36,000–50,000',
'11,900',
'10,000',
'20,000,000[141] to 27,000,000[142][143][144][145][146]',
'',
'2,100',
'100',
'7,600',
'200',
'450,900',
'419,400',
'1,027,000[160] to 1,700,000[159]',
'',
'70,000,000to 85,000,000']
I want to plot a graph, but the [] footnotes would completely ruin it, and many of the values have them. Is it also possible to select only the first number when there is a pair in one cell? I'd appreciate it if anyone could teach me... Thank you

You can use .find_next() with the text=True parameter on the casualties cell, then split/strip accordingly.
For example:
import requests
from bs4 import BeautifulSoup

url = 'https://en.wikipedia.org/wiki/World_War_II_casualties'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

for tr in soup.table.select('tr:has(td)')[1:]:
    tds = tr.select('td')
    if not tds[0].b:
        continue
    name = tds[0].b.get_text(strip=True, separator=' ')
    casualties = tds[5].find_next(text=True).strip()
    print('{:<30} {}'.format(name, casualties.split('–')[0].split()[0] if casualties else ''))
Prints:
Albania 30,000
Australia 40,400
Austria
Belgium 88,000
Brazil 2,000
Bulgaria 21,500
Burma 252,600
Canada 43,600
China 15,000,000
Cuba 100
Czechoslovakia 340,000
Denmark 6,000
Dutch East Indies 3,000,000
Egypt 1,100
Estonia 83,000
Ethiopia 100,000
Finland 85,000
France 600,000
French Indochina 1,000,000
Germany 6,900,000
Greece 507,000
Guam 1,000
Hungary 464,000
Iceland 200
India 2,200,000
Iran 200
Iraq 700
Ireland 100
Italy 492,400
Japan 2,500,000
Korea 483,000
Latvia 250,000
Lithuania 370,000
Luxembourg 5,000
Malaya & Singapore 100,000
Malta 1,500
Mexico 100
Mongolia 300
Nauru 500
Nepal
Netherlands 210,000
Newfoundland 1,200
New Zealand 11,700
Norway 10,200
Papua and New Guinea 15,000
Philippines 557,000
Poland 5,900,000
Portuguese Timor 40,000
Romania 500,000
Ruanda-Urundi 36,000
South Africa 11,900
South Pacific Mandate 10,000
Soviet Union 20,000,000
Spain
Sweden 2,100
Switzerland 100
Thailand 7,600
Turkey 200
United Kingdom 450,900
United States 419,400
Yugoslavia 1,027,000
Approx. totals 70,000,000
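If the goal is to plot these values, you still need to turn your deaths strings into numbers. Here is a minimal sketch (not part of the answer above) that strips the bracketed footnotes with a regex, keeps only the first figure when a cell holds a range, and skips empty cells:

import re

cleaned = []
for death in deaths:
    # drop footnote markers like [35]
    text = re.sub(r'\[\d+\]', '', death)
    # keep only the first figure when the cell holds a range ("X to Y" or "X–Y")
    first = re.split(r'\s*(?:to|–|-)\s*', text)[0].strip()
    if first:
        cleaned.append(int(first.replace(',', '')))

The cleaned list then contains plain integers you can pass straight to matplotlib.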

Related

Full country name to country code in Dataframe

I have these kinds of country values in the dataframe: some are full country names, some are alpha-2 codes.
Country
------------------------
8836 United Kingdom
1303 ES
7688 United Kingdom
12367 FR
7884 United Kingdom
6844 United Kingdom
3706 United Kingdom
3567 UK
6238 FR
588 UK
4901 United Kingdom
568 UK
4880 United Kingdom
11284 France
1273 Spain
2719 France
1386 UK
12838 United Kingdom
868 France
1608 UK
Name: Country, dtype: object
Note: Some data in Country are empty.
How will I be able to create a new column with the alpha-2 country codes in it?
Country | Country Code
---------------------------------------
United Kingdom | UK
France | FR
FR | FR
UK | UK
Italy | IT
Spain | ES
ES | ES
...
You can try this, as I already mentioned in a comment earlier.
import pandas as pd
df = pd.DataFrame([[1, 'UK'],[2, 'United Kingdom'],[3, 'ES'],[2, 'Spain']], columns=['id', 'Country'])
# Create a copy of the Country column as alpha-2
df['alpha-2'] = df['Country']
# Create a lookup with the required values
lookup_table = {'United Kingdom':'UK', 'Spain':'ES'}
# Replace the alpha-2 column with the lookup values
df = df.replace({'alpha-2':lookup_table})
print(df)
Output
You will have to define a dictionary for the replacements (or find a library that does it for you). The abbreviations look pretty close to the IBAN country codes to me. The biggest standout is United Kingdom => GB, as opposed to UK in your example.
I would start with the IBAN codes and define a big dictionary like this:
mappings = {
    "Afghanistan": "AF",
    "Albania": "AL",
    ...
}
df["Country Code"] = df["Country"].replace(mappings)

Subtotal for each level in Pivot table

I'm trying to create a pivot table that has, besides the general total, a subtotal between each row level.
I created my df:
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.array([['SOUTH AMERICA', 'BRAZIL', 'SP', 500],
              ['SOUTH AMERICA', 'BRAZIL', 'RJ', 200],
              ['SOUTH AMERICA', 'BRAZIL', 'MG', 150],
              ['SOUTH AMERICA', 'ARGENTINA', 'BA', 180],
              ['SOUTH AMERICA', 'ARGENTINA', 'CO', 300],
              ['EUROPE', 'SPAIN', 'MA', 400],
              ['EUROPE', 'SPAIN', 'BA', 110],
              ['EUROPE', 'FRANCE', 'PA', 320],
              ['EUROPE', 'FRANCE', 'CA', 100],
              ['EUROPE', 'FRANCE', 'LY', 80]], dtype=object),
    columns=["CONTINENT", "COUNTRY", "LOCATION", "POPULATION"]
)
After that I created my pivot table as shown below:
table = pd.pivot_table(df, values=['POPULATION'], index=['CONTINENT', 'COUNTRY', 'LOCATION'], fill_value=0, aggfunc=np.sum, dropna=True)
table
To do the subtotals I started by summing at the CONTINENT level:
tab_tots = table.groupby(level='CONTINENT').sum()
tab_tots.index = [tab_tots.index, ['Total'] * len(tab_tots)]
And concatenated it with my first pivot table to get the subtotals:
pd.concat([table, tab_tots]).sort_index()
And got it.
How can I get the values separated by level like in the first table?
I'm not finding a way to do this.
Use margins=True, and change your pivot's index and columns a little bit:
newdf=pd.pivot_table(df, index=['CONTINENT'],values=['POPULATION'], columns=[ 'COUNTRY', 'LOCATION'], aggfunc=np.sum, dropna=True,margins=True)
newdf.drop('All').stack([1,2])
Out[132]:
POPULATION
CONTINENT COUNTRY LOCATION
EUROPE All 1010.0
FRANCE CA 100.0
LY 80.0
PA 320.0
SPAIN BA 110.0
MA 400.0
SOUTH AMERICA ARGENTINA BA 180.0
CO 300.0
All 1330.0
BRAZIL MG 150.0
RJ 200.0
SP 500.0
IIUC:
contotal = table.groupby(level=0).sum().assign(COUNTRY='TOTAL', LOCATION='').set_index(['COUNTRY','LOCATION'], append=True)
coutotal = table.groupby(level=[0,1]).sum().assign(LOCATION='TOTAL').set_index(['LOCATION'], append=True)
df_out = (pd.concat([table,contotal,coutotal]).sort_index())
df_out
Output:
POPULATION
CONTINENT COUNTRY LOCATION
EUROPE FRANCE CA 100
LY 80
PA 320
TOTAL 500
SPAIN BA 110
MA 400
TOTAL 510
TOTAL 1010
SOUTH AMERICA ARGENTINA BA 180
CO 300
TOTAL 480
BRAZIL MG 150
RJ 200
SP 500
TOTAL 850
TOTAL 1330
You want to do something like this instead
tab_tots.index = [tab_tots.index, ['Total'] * len(tab_tots), [''] * len(tab_tots)]
This gives the following, which I think is what you are after:
In [277]: pd.concat([table, tab_tots]).sort_index()
Out[277]:
POPULATION
CONTINENT COUNTRY LOCATION
EUROPE FRANCE CA 100
LY 80
PA 320
SPAIN BA 110
MA 400
Total 1010
SOUTH AMERICA ARGENTINA BA 180
CO 300
BRAZIL MG 150
RJ 200
SP 500
Total 1330
Note that although this solves your problem, it isn't good programming stylistically. You have inconsistent logic on your summed levels.
This makes sense for a UI, but if you are working with the data programmatically it would perhaps be better to use
tab_tots.index = [tab_tots.index, ['All'] * len(tab_tots), ['All'] * len(tab_tots)]
This follows SQL table logic and will give you
In [289]: pd.concat([table, tab_tots]).sort_index()
Out[289]:
POPULATION
CONTINENT COUNTRY LOCATION
EUROPE All All 1010
FRANCE CA 100
LY 80
PA 320
SPAIN BA 110
MA 400
SOUTH AMERICA ARGENTINA BA 180
CO 300
All All 1330
BRAZIL MG 150
RJ 200
SP 500

How to replace dataframe columns country name with continent?

I have a DataFrame like this.
problem.head(30)
Out[25]:
Country
0 Sweden
1 Africa
2 Africa
3 Africa
4 Africa
5 Germany
6 Germany
7 Germany
8 Germany
9 UK
10 Germany
11 Germany
12 Germany
13 Germany
14 Sweden
15 Sweden
16 Africa
17 Africa
18 Africa
19 Africa
20 Africa
21 Africa
22 Africa
23 Africa
24 Africa
25 Africa
26 Pakistan
27 Pakistan
28 ZA
29 ZA
Now I want to replace each country name with its continent name.
What I did is create a list for each continent, covering the 56 countries that appear in my data frame:
asia = ['Afghanistan', 'Bahrain', 'United Arab Emirates', 'Saudi Arabia', 'Kuwait', 'Qatar', 'Oman',
        'Sultanate of Oman', 'Lebanon', 'Iraq', 'Yemen', 'Pakistan', 'Lebanon', 'Philippines', 'Jordan']
europe = ['Germany', 'Spain', 'France', 'Italy', 'Netherlands', 'Norway', 'Sweden', 'Czech Republic', 'Finland',
          'Denmark', 'Czech Republic', 'Switzerland', 'UK', 'UK&I', 'Poland', 'Greece', 'Austria',
          'Bulgaria', 'Hungary', 'Luxembourg', 'Romania', 'Slovakia', 'Estonia', 'Slovenia', 'Portugal',
          'Croatia', 'Lithuania', 'Latvia', 'Serbia', 'Estonia', 'ME', 'Iceland']
africa = ['Morocco', 'Tunisia', 'Africa', 'ZA', 'Kenya']
other = ['USA', 'Australia', 'Reunion', 'Faroe Islands']
Now I'm trying to replace using
dataframe['Continent'] = dataframe['Country'].replace(asia, 'Asia', regex=True)
where asia is my list name and 'Asia' is the replacement text, but it is not working.
It only works for
dataframe['Continent'] = dataframe['Country'].replace(np.nan, 'Asia', regex=True)
So, any help would be appreciated.
Using apply with a custom function.
Demo:
import pandas as pd

asia = ['Afghanistan', 'Bahrain', 'United Arab Emirates', 'Saudi Arabia', 'Kuwait', 'Qatar', 'Oman',
        'Sultanate of Oman', 'Lebanon', 'Iraq', 'Yemen', 'Pakistan', 'Lebanon', 'Philippines', 'Jordan']
europe = ['Germany', 'Spain', 'France', 'Italy', 'Netherlands', 'Norway', 'Sweden', 'Czech Republic', 'Finland',
          'Denmark', 'Czech Republic', 'Switzerland', 'UK', 'UK&I', 'Poland', 'Greece', 'Austria',
          'Bulgaria', 'Hungary', 'Luxembourg', 'Romania', 'Slovakia', 'Estonia', 'Slovenia', 'Portugal',
          'Croatia', 'Lithuania', 'Latvia', 'Serbia', 'Estonia', 'ME', 'Iceland']
africa = ['Morocco', 'Tunisia', 'Africa', 'ZA', 'Kenya']
other = ['USA', 'Australia', 'Reunion', 'Faroe Islands']

def GetConti(country):
    # Return the continent a country belongs to, defaulting to "other"
    if country in asia:
        return "Asia"
    elif country in europe:
        return "Europe"
    elif country in africa:
        return "Africa"
    else:
        return "other"

df = pd.DataFrame({"Country": ["Sweden", "Africa", "Africa", "Germany", "Germany", "UK", "Pakistan"]})
df['Continent'] = df['Country'].apply(lambda x: GetConti(x))
print(df)
Output:
Country Continent
0 Sweden Europe
1 Africa Africa
2 Africa Africa
3 Germany Europe
4 Germany Europe
5 UK Europe
6 Pakistan Asia
It would be better to store your country-to-continent map as a dictionary rather than four separate lists. You can do this as follows, starting with your current lists:
continents = {country: 'Asia' for country in asia}
continents.update({country: 'Europe' for country in europe})
continents.update({country: 'Africa' for country in africa})
continents.update({country: 'Other' for country in other})
Then you can use the Pandas map function to map continents to countries:
dataframe['Continent'] = dataframe['Country'].map(continents)
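One caveat worth noting (not in the original answer): map() returns NaN for any country that is missing from the dictionary, so you may want to fill those in explicitly:

# Countries missing from the mapping become NaN with .map(); fall back to 'Other'
dataframe['Continent'] = dataframe['Country'].map(continents).fillna('Other')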

Plotting UK Districts, Postcode Areas and Regions

I am wondering if we can do a similar choropleth to the one below with a UK district, postcode area or region map.
It would be great if you could show an example for UK choropleths.
Geographic shape files can be downloaded from http://martinjc.github.io/UK-GeoJSON/
import os
import pandas as pd
import folium
from branca.utilities import split_six

state_geo = os.path.join('data', 'us-states.json')
state_unemployment = os.path.join('data', 'US_Unemployment_Oct2012.csv')
state_data = pd.read_csv(state_unemployment)
j1 = pd.read_json(state_geo)

threshold_scale = split_six(state_data['Unemployment'])
m = folium.Map(location=[48, -102], zoom_start=3)
m.choropleth(
    geo_path=state_geo,
    geo_str='choropleth',
    data=state_data,
    columns=['State', 'Unemployment'],
    key_on='feature.id',
    fill_color='YlGn',
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name='Unemployment Rate (%)'
)
m
m.save('choropleth.html')
This is what I did.
First, collect your data. I used www.nomisweb.co.uk to collect employment rates for the main regions:
North East (England)
North West (England)
Yorkshire and The Humber
East Midlands (England)
West Midlands (England)
East of England
London
South East (England)
South West (England)
Wales
Scotland
Northern Ireland
I saved this dataset as UKEmploymentData.csv. Note that you will have to change the region names to match the geo data IDs.
Then I followed what you posted using the NUTS data from the ONS geoportal.
import pandas as pd
import os
import json

# read in population data
df = pd.read_csv('UKEmploymentData.csv')

import folium
from branca.utilities import split_six

state_geo = 'http://geoportal1-ons.opendata.arcgis.com/datasets/01fd6b2d7600446d8af768005992f76a_4.geojson'

m = folium.Map(location=[55, 4], zoom_start=5)
m.choropleth(
    geo_data=state_geo,
    data=df,
    columns=['region', 'Total in employment - aged 16 and over'],
    key_on='feature.properties.nuts118nm',
    fill_color='YlGn',
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name='Employment Rate (%)',
    highlight=True
)
m
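If you are on a newer folium release where Map.choropleth() is no longer available (an assumption about your setup, not part of the original answer), the equivalent uses the folium.Choropleth class with the same arguments:

import folium

m = folium.Map(location=[55, 4], zoom_start=5)
folium.Choropleth(
    geo_data=state_geo,   # same GeoJSON URL as above
    data=df,              # same employment DataFrame
    columns=['region', 'Total in employment - aged 16 and over'],
    key_on='feature.properties.nuts118nm',
    fill_color='YlGn',
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name='Employment Rate (%)',
    highlight=True,
).add_to(m)
m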

Dynamically merge row cells with the same values in Excel

In a worksheet with automatic filters, I have this (values and column names are just examples):
Continent Country City Street
----------------------------------------------------------
Asia Vietnam Hanoi egdsqgdfgdsfg
Asia Vietnam Hanoi fhfdghdfdh
Asia Vietnam Hanoi dfhdfhfdhfdhfdhfdh
Asia Vietnam Saigon ggdsfgfdsdgsdfgdf
Asia Vietnam Hue qsdfqsfqsdf
Asia China Beijing qegfqsddfgdf
Asia China Canton sdgsdfgsdgsdg
Asia China Canton tjgjfgj
Asia China Canton tzeryrty
Asia Japan Tokyo ertsegsgsdfdg
Asia Japan Kyoto qegdgdfgdfgdf
Asia Japan Sapporo gsdgfdgsgsdfgf
Europa France Paris qfqsdfdsqfgsdfgsg
Europa France Toulon qgrhrgqzfqzetzeqrr
Europa France Lyon pàjhçuhàçuh
Europa Italy Rome qrgfqegfgdfg
Europa Italy Rome qergqegsdfgsdfgdsg
I would like this to be displayed as below, with rows merged dynamically when the filters change:
Continent Country City Street
----------------------------------------------------------
egdsqgdfgdsfg
Hanoi fhfdghdfdh
Vietnam dfhdfhfdhfdhfdhfdh
Saigon ggdsfgfdsdgsdfgdf
Hue qsdfqsfqsdf
---
Asia Beijing qegfqsddfgdf
China sdgsdfgsdgsdg
Canton tjgjfgj
tzeryrty
---
Tokyo ertsegsgsdfdg
Japan Kyoto qegdgdfgdfgdf
Sapporo gsdgfdgsgsdfgf
---
Paris qfqsdfdsqfgsdfgsg
France Toulon qgrhrgqzfqzetzeqrr
Europa Lyon pàjhçuhàçuh
Italy Rome qrgfqegfgdfg
qergqegsdfgsdfgdsg
Is a macro mandatory for this?
I don't want to merge values in the Street column, and I want to keep all the rows. I just want to change how the first columns are displayed, to avoid long runs of repeated values.
You can also set up a PivotTable - this would look like this:
Just go to Insert -> PivotTable, select your data as input, and create the PivotTable in a new worksheet ;)
Put all fields in the "Rows" section and remove any subtotal or sum calculations.
Because you don't have any values to sum up, you can just hide those columns to get a clear view.
If you want to use a formula instead, you can do it like this:
=IF(MATCH(Tabelle1!A1;(Tabelle1!A:A);0)=ROW();Tabelle1!A1;"")
Insert this formula in another sheet: MATCH finds the row of the first occurrence of the value in column A, so the value is displayed only on its first row and left blank on the repeats.
