I am trying to load a CSV file from the following URL into a dataframe using Python 3.5 and Pandas:
link = "http://api.worldbank.org/v2/en/indicator/NY.GDP.MKTP.CD?downloadformat=csv"
The CSV file (API_NY.GDP.MKTP.CD_DS2_en_csv_v2.csv) is inside a zip file. My attempt:
import urllib.request
import zipfile

import pandas as pd

urllib.request.urlretrieve(link, "GDP.zip")
compressed_file = zipfile.ZipFile('GDP.zip')
csv_file = compressed_file.open('API_NY.GDP.MKTP.CD_DS2_en_csv_v2.csv')
GDP = pd.read_csv(csv_file)
But when reading it, I got the error "pandas.io.common.CParserError: Error tokenizing data. C error: Expected 3 fields in line 5, saw 62".
Any idea?
I think you need the skiprows parameter, because the CSV header is in row 5:
GDP = pd.read_csv(csv_file, skiprows=4)
print(GDP.head())
Country Name Country Code Indicator Name Indicator Code 1960 \
0 Aruba ABW GDP (current US$) NY.GDP.MKTP.CD NaN
1 Andorra AND GDP (current US$) NY.GDP.MKTP.CD NaN
2 Afghanistan AFG GDP (current US$) NY.GDP.MKTP.CD 5.377778e+08
3 Angola AGO GDP (current US$) NY.GDP.MKTP.CD NaN
4 Albania ALB GDP (current US$) NY.GDP.MKTP.CD NaN
1961 1962 1963 1964 1965 \
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 5.488889e+08 5.466667e+08 7.511112e+08 8.000000e+08 1.006667e+09
3 NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN
2008 2009 2010 2011 \
0 ... 2.791961e+09 2.498933e+09 2.467704e+09 2.584464e+09
1 ... 4.001201e+09 3.650083e+09 3.346517e+09 3.427023e+09
2 ... 1.019053e+10 1.248694e+10 1.593680e+10 1.793024e+10
3 ... 8.417803e+10 7.549238e+10 8.247091e+10 1.041159e+11
4 ... 1.288135e+10 1.204421e+10 1.192695e+10 1.289087e+10
2012 2013 2014 2015 2016 Unnamed: 61
0 NaN NaN NaN NaN NaN NaN
1 3.146152e+09 3.248925e+09 NaN NaN NaN NaN
2 2.053654e+10 2.004633e+10 2.005019e+10 1.933129e+10 NaN NaN
3 1.153984e+11 1.249121e+11 1.267769e+11 1.026269e+11 NaN NaN
4 1.231978e+10 1.278103e+10 1.321986e+10 1.139839e+10 NaN NaN
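The skiprows idea can also be reproduced on a tiny in-memory file; the metadata lines below are a made-up miniature of the World Bank layout, not the real download:

```python
import io

import pandas as pd

# Four metadata lines precede the real header, mimicking the World Bank CSV.
raw = """Data Source,World Development Indicators
Last Updated Date,2017-01-01
Note,metadata line
Note,another metadata line
Country Name,Country Code,1960,1961
Aruba,ABW,,
Afghanistan,AFG,537777811.1,548888895.6
"""

# skiprows=4 drops the metadata so line 5 becomes the header.
gdp = pd.read_csv(io.StringIO(raw), skiprows=4)
print(gdp.columns.tolist())  # ['Country Name', 'Country Code', '1960', '1961']
```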
Given a dataframe df as follows:
date value 20211003 20211010 20211017
0 2021-9-19 3613.9663 NaN NaN NaN
1 2021-9-26 3613.0673 NaN NaN NaN
2 2021-10-3 3568.1668 NaN NaN NaN
3 2021-10-10 3592.1666 3510.221000 NaN NaN
4 2021-10-17 3572.3662 3465.737012 3534.220800 NaN
5 2021-10-24 3582.6036 3479.107035 3539.856801 3514.420400
6 2021-10-31 3547.3361 3421.161235 3481.911001 3456.474600
7 2021-11-7 3491.5677 3370.140147 3439.284539 3416.621024
8 2021-11-14 3539.1002 3319.289523 3391.930037 3370.079953
9 2021-11-21 3560.3734 3261.343723 3333.984237 3312.134153
10 2021-11-28 3564.0894 3255.328902 3338.967086 3305.054247
11 2021-12-5 3607.4320 3313.274702 3396.912886 3363.000047
12 2021-12-12 3666.3479 3371.220502 3450.172564 3412.234440
13 2021-12-19 3632.3638 NaN 3466.930383 3428.683490
14 2021-12-26 3618.0535 NaN NaN 3370.737690
Let's say the columns after the value column (20211003, 20211010 and 20211017) are rolling forecast results of value. Instead of the 10 values in each column, I need to keep only the last 3 non-NaN values. The slicing rule works from left to right and from bottom to top: row 2021-11-28 of column 20211003 is the starting point, and the kept window shifts down one row for each subsequent column. The expected result will look like this:
date value 20211003 20211010 20211017
0 2021-9-19 3613.9663 NaN NaN NaN
1 2021-9-26 3613.0673 NaN NaN NaN
2 2021-10-3 3568.1668 NaN NaN NaN
3 2021-10-10 3592.1666 NaN NaN NaN
4 2021-10-17 3572.3662 NaN NaN NaN
5 2021-10-24 3582.6036 NaN NaN NaN
6 2021-10-31 3547.3361 NaN NaN NaN
7 2021-11-7 3491.5677 NaN NaN NaN
8 2021-11-14 3539.1002 NaN NaN NaN
9 2021-11-21 3560.3734 NaN NaN NaN
10 2021-11-28 3564.0894 3255.328902 NaN NaN
11 2021-12-5 3607.4320 3313.274702 3396.912886 NaN
12 2021-12-12 3666.3479 3371.220502 3450.172564 3412.23444
13 2021-12-19 3632.3638 NaN 3466.930383 3428.68349
14 2021-12-26 3618.0535 NaN NaN 3370.73769
How could I achieve that in Pandas? Thanks.
Reference:
Iterate over multiple columns and replace the values in these columns after a row (increment) with null values
# keep only the last 3 non-NaN values per forecast column,
# then re-align them on the original index via join
df.iloc[:, :2].join(df.iloc[:, 2:].apply(lambda x: x.dropna().tail(3)))
date value 20211003 20211010 20211017
0 2021-9-19 3613.9663 NaN NaN NaN
1 2021-9-26 3613.0673 NaN NaN NaN
2 2021-10-3 3568.1668 NaN NaN NaN
3 2021-10-10 3592.1666 NaN NaN NaN
4 2021-10-17 3572.3662 NaN NaN NaN
5 2021-10-24 3582.6036 NaN NaN NaN
6 2021-10-31 3547.3361 NaN NaN NaN
7 2021-11-7 3491.5677 NaN NaN NaN
8 2021-11-14 3539.1002 NaN NaN NaN
9 2021-11-21 3560.3734 NaN NaN NaN
10 2021-11-28 3564.0894 3255.328902 NaN NaN
11 2021-12-5 3607.4320 3313.274702 3396.912886 NaN
12 2021-12-12 3666.3479 3371.220502 3450.172564 3412.23444
13 2021-12-19 3632.3638 NaN 3466.930383 3428.68349
14 2021-12-26 3618.0535 NaN NaN 3370.73769
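The same idea on a minimal toy frame (the column names and values here are made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "date": ["d1", "d2", "d3", "d4", "d5"],
    "value": [1, 2, 3, 4, 5],
    "f1": [np.nan, 10.0, 11.0, 12.0, 13.0],
})

# Keep only the last three non-NaN values of every forecast column;
# join() re-aligns the shortened columns on the original index,
# leaving NaN everywhere a value was dropped.
out = df.iloc[:, :2].join(df.iloc[:, 2:].apply(lambda s: s.dropna().tail(3)))
```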
I am trying to read a data file using pandas:
import pandas as pd
file_path = "/home/gopakumar/Downloads/test.DAT"
df = pd.read_csv(file_path, header=None, sep=';', engine='python',encoding="windows-1252")
and getting the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 610, in read_csv
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 468, in _read
return parser.read(nrows)
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 1057, in read
index, columns, col_dict = self._engine.read(nrows)
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 2496, in read
alldata = self._rows_to_cols(content)
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 3189, in _rows_to_cols
self._alert_malformed(msg, row_num + 1)
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 2948, in _alert_malformed
raise ParserError(msg)
pandas.errors.ParserError: Expected 5 fields in line 3, saw 6
From the error description, I understand that the file has a different number of columns in some rows. This is simply how the file is, though. Is there any way to read a file whose rows have different numbers of columns?
Following is a sample file:
0050;V2019.8.0.0;V2019.8.0.0;20200407;184821
0070;;7;0;7
0080;11;50;Abcd.pdf;Abcd;C:\Daten\Ablage\
0090;1;H;Holz;0;0;0;Holz;;;Holz
0090;1;Z;Abcdör;0;0;0;Abcd;;;Abcd
0090;1;N;Abcd;0;0;0;Abcd;;;Abcd
If you use header=None, all rows must have the same number of columns, so pad the first row to the full width like below:
from io import StringIO

import pandas as pd

data = r"""
0050;V2019.8.0.0;V2019.8.0.0;20200407;184821;;;;;;;;;;;
0070;;7;0;7
0080;11;50;Abcd.pdf;Abcd;C:\Daten\Ablage\
0090;1;H;Holz;0;0;0;Holz;;;Holz
0090;1;Z;Abcdör;0;0;0;Abcd;;;Abcd
0090;1;N;Abcd;0;0;0;Abcd;;;Abcd
"""
df = pd.read_csv(StringIO(data), header=None, sep=';')
Note the raw string: without it, the trailing backslash in C:\Daten\Ablage\ escapes the newline and merges that row with the next one.
Output:
    0            1            2         3       4                 5    6     7   8   9    10  11  12  13  14  15
0  50  V2019.8.0.0  V2019.8.0.0  20200407  184821               NaN  NaN   NaN NaN NaN   NaN NaN NaN NaN NaN NaN
1  70          NaN            7         0       7               NaN  NaN   NaN NaN NaN   NaN NaN NaN NaN NaN NaN
2  80           11           50  Abcd.pdf    Abcd  C:\Daten\Ablage\  NaN   NaN NaN NaN   NaN NaN NaN NaN NaN NaN
3  90            1            H      Holz       0                 0  0.0  Holz NaN NaN  Holz NaN NaN NaN NaN NaN
4  90            1            Z    Abcdör       0                 0  0.0  Abcd NaN NaN  Abcd NaN NaN NaN NaN NaN
5  90            1            N      Abcd       0                 0  0.0  Abcd NaN NaN  Abcd NaN NaN NaN NaN NaN
Or, if you know how many columns the data has, you can pass explicit names instead of padding:
cols = [f'col_{i}' for i in range(0,16)]
df = pd.read_csv(StringIO(data), names=cols, sep=';')
Output:
   col_0        col_1        col_2     col_3   col_4             col_5 col_6 col_7 col_8 col_9 col_10 col_11 col_12 col_13 col_14 col_15
0     50  V2019.8.0.0  V2019.8.0.0  20200407  184821               NaN   NaN   NaN   NaN   NaN    NaN    NaN    NaN    NaN    NaN    NaN
1     70          NaN            7         0       7               NaN   NaN   NaN   NaN   NaN    NaN    NaN    NaN    NaN    NaN    NaN
2     80           11           50  Abcd.pdf    Abcd  C:\Daten\Ablage\   NaN   NaN   NaN   NaN    NaN    NaN    NaN    NaN    NaN    NaN
3     90            1            H      Holz       0                 0   0.0  Holz   NaN   NaN   Holz    NaN    NaN    NaN    NaN    NaN
4     90            1            Z    Abcdör       0                 0   0.0  Abcd   NaN   NaN   Abcd    NaN    NaN    NaN    NaN    NaN
5     90            1            N      Abcd       0                 0   0.0  Abcd   NaN   NaN   Abcd    NaN    NaN    NaN    NaN    NaN
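If the widest row is not known up front, one possible sketch (assuming the separator never occurs inside quoted fields) is to pre-scan the text for the maximum field count and pass that many names:

```python
import io

import pandas as pd

# Ragged sample data: 5, 5, and 11 fields per line.
data = ("0050;V2019.8.0.0;V2019.8.0.0;20200407;184821\n"
        "0070;;7;0;7\n"
        "0090;1;H;Holz;0;0;0;Holz;;;Holz\n")

# The widest line determines how many columns read_csv must expect.
max_cols = max(line.count(";") + 1 for line in data.splitlines() if line)
df = pd.read_csv(io.StringIO(data), sep=";", names=list(range(max_cols)))
```

Shorter rows are then padded with NaN instead of raising a ParserError.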
I am writing a script to scrape a series of tables from a PDF into Python using tabula-py.
This works and I do get the data, but each logical row is split across multiple lines, which makes the result useless in practice.
I would like to merge each run of continuation rows into the preceding row whose first column (Tag) is not NaN.
I was about to put the whole thing in an iterator and do it manually, but pandas is a powerful tool and I just don't have the pandas vocabulary to search for the right one. Any help is much appreciated.
My Code
import tabula

filename = 'tags.pdf'
tagTableStart = 2   # 784
tagTableEnd = 39    # 822
tableHeadings = ['Tag', 'Item', 'Length', 'Description', 'Value']
pageRange = "%d-%d" % (tagTableStart, tagTableEnd)
print("Scanning pages %s" % pageRange)
# extract all the tables in that page range
tables = tabula.read_pdf(filename, pages=pageRange)
How the data is stored in the DataFrame:
(Empty fields are NaN)
Tag  Item  Length  Description  Value
AA   Some  2       Very Very
     Text          Very long
                   Value
AB   More  4       Other Very   aaaa
     Text          Very long    bbbb
                   Value        cccc
How I want the data:
This is almost as it is displayed in the pdf (I couldn't figure out how to make text multi line in SO editor)
Tag  Item        Length  Description                   Value
AA   Some\nText  2       Very Very\nVery long\nValue
AB   More\nText  4       Other Very\nVery long\nValue  aaaa\nbbbb\ncccc
Actual sample output (obfuscated)
Tag Item Length Description Value
0 AA PYTHROM-PARTY-I 20 Some Current defined values are :
1 NaN NaN NaN texst Byte1:
2 NaN NaN NaN NaN C
3 NaN NaN NaN NaN DD
4 NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN DD
6 NaN NaN NaN NaN DD
7 NaN NaN NaN NaN DD
8 NaN NaN NaN NaN NaN
9 NaN NaN NaN NaN B :
10 NaN NaN NaN NaN JLSAFISFLIHAJSLIhdsflhdliugdyg89o7fgyfd
11 NaN NaN NaN NaN ISFLIHAJSLIhdsflhdliugdyg89o7fgyfd
12 NaN NaN NaN NaN upon ISFLIHAJSLIhdsflhdliugdyg89o7fgy
13 NaN NaN NaN NaN asdsadct on the dasdsaf the
14 NaN NaN NaN NaN actsdfion.
15 NaN NaN NaN NaN NaN
16 NaN NaN NaN NaN SLKJDBFDLFKJBDSFLIUFy7dfsdfiuojewv
17 NaN NaN NaN NaN csdfgfdgfd.
18 NaN NaN NaN NaN NaN
19 NaN NaN NaN NaN fgfdgdfgsdfgfdsgdfsgfdgfdsgsdfgfdg
20 BB PRESENT-AMOUNT-BOX 11 Lorem Ipsum NaN
21 CC SOME-OTHER-VALUE 1 sdlkfgsdsfsdf 1
22 NaN NaN NaN device NaN
23 NaN NaN NaN ueghkjfgdsfdskjfhgsdfsdfkjdshfgsfliuaew8979vfhsdf NaN
24 NaN NaN NaN dshf87hsdfe4ir8hod9 NaN
Create groups from the forward-filled Tag column, then join the rows in each group:
# one '\n'.join aggregator per column
agg_func = dict(zip(df.columns, [lambda s: '\n'.join(s).strip()] * len(df.columns)))
# group by the forward-filled Tag so continuation rows merge into their parent row
out = df.fillna('').groupby(df['Tag'].ffill(), as_index=False).agg(agg_func)
Output:
>>> out
Tag Item Length Description Value
0 AA Some\nText 2 Very Very\nVery long\nValue
1 AB More\nText 4 Other Very\nVery long\nValue aaaa\nbbbb\ncccc
agg_func is equivalent to writing:
{'Tag': lambda s: '\n'.join(s).strip(),
'Item': lambda s: '\n'.join(s).strip(),
'Length': lambda s: '\n'.join(s).strip(),
'Description': lambda s: '\n'.join(s).strip(),
'Value': lambda s: '\n'.join(s).strip()}
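The forward-fill-then-join pattern in isolation, on a tiny made-up frame (one column shown for brevity):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Tag":  ["AA", np.nan, np.nan, "AB", np.nan],
    "Item": ["Some", "Text", np.nan, "More", "Text"],
})

# ffill() propagates each Tag down over its continuation rows,
# giving groupby one label per logical record; the join/strip
# collapses each group back into a single multi-line cell.
out = (df.fillna("")
         .groupby(df["Tag"].ffill())["Item"]
         .agg(lambda s: "\n".join(s).strip())
         .reset_index())
```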
I am trying to do an index-match between two data sets but am having trouble. Here is an example of what I am trying to do: I want to fill the empty columns "a", "b", "c" in df with the data from df2, matching rows on "Machine" and "Year" and using "Order Type" to select the column.
The first dataframe, let's call it "df":
Machine Year Cost a b c
0 abc 2014 5500 nan nan nan
1 abc 2015 89 nan nan nan
2 abc 2016 600 nan nan nan
3 abc 2017 250 nan nan nan
4 abc 2018 2100 nan nan nan
5 abc 2019 590 nan nan nan
6 dcb 2014 3000 nan nan nan
7 dcb 2015 100 nan nan nan
The second data set is called "df2"
Order Type Machine Year Total Count
0 a abc 2014 1
1 b abc 2014 1
2 c abc 2014 2
4 c dcb 2015 4
3 a abc 2016 3
Final Output is:
Machine Year Cost a b c
0 abc 2014 5500 1 1 2
1 abc 2015 89 nan nan nan
2 abc 2016 600 3 nan nan
3 abc 2017 250 nan nan nan
4 abc 2018 2100 nan nan nan
5 abc 2019 590 nan nan nan
6 dcb 2014 3000 nan nan nan
7 dcb 2015 100 nan nan 4
Thanks for the help in advance.
Consider DataFrame.pivot to reshape df2, then merge it with df:
final_df = (
    df[["Machine", "Year", "Cost"]]
    .merge(
        df2.pivot(
            index=["Machine", "Year"],
            columns="Order Type",
            values="Total Count"
        ).reset_index(),
        on=["Machine", "Year"],
        how="left"
    )
)
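A runnable sketch with a cut-down version of the sample data (how='left' keeps the df rows that have no match in df2):

```python
import pandas as pd

df = pd.DataFrame({
    "Machine": ["abc", "abc", "dcb"],
    "Year": [2014, 2016, 2015],
    "Cost": [5500, 600, 100],
})
df2 = pd.DataFrame({
    "Order Type": ["a", "b", "c", "a"],
    "Machine": ["abc", "abc", "dcb", "abc"],
    "Year": [2014, 2014, 2015, 2016],
    "Total Count": [1, 1, 4, 3],
})

# Pivot df2 so each Order Type becomes its own column, then left-merge
# onto df so unmatched (Machine, Year) pairs stay as NaN.
wide = (df2.pivot(index=["Machine", "Year"],
                  columns="Order Type",
                  values="Total Count")
           .reset_index())
final_df = df.merge(wide, on=["Machine", "Year"], how="left")
```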
Given a dataset as follows:
city value1 March April May value2 Jun Jul Aut
0 bj 12 NaN NaN NaN 15 NaN NaN NaN
1 sh 8 NaN NaN NaN 13 NaN NaN NaN
2 gz 9 NaN NaN NaN 9 NaN NaN NaN
3 sz 6 NaN NaN NaN 16 NaN NaN NaN
I would like to fill value1 into one column randomly selected from 'March', 'April', 'May', and likewise fill value2 into one column randomly selected from 'Jun', 'Jul', 'Aut'.
Output desired:
city value1 March April May value2 Jun Jul Aut
0 bj 12 NaN 12.0 NaN 15 NaN 15.0 NaN
1 sh 8 8.0 NaN NaN 13 NaN NaN 13.0
2 gz 9 NaN NaN 9.0 9 NaN 9.0 NaN
3 sz 6 NaN 6.0 NaN 16 16.0 NaN NaN
How could I do that in Python? Thanks.
Here is one way: define a function that randomly selects one column index per row from the slice of the dataframe given by cols, then fills in the corresponding value from the value column (val_col) passed to the function:
import numpy as np

def fill(df, val_col, cols):
    # one random column index per row
    i = np.random.choice(len(cols), len(df))
    vals = df[cols].to_numpy()
    vals[range(len(df)), i] = list(df[val_col])
    return df.assign(**dict(zip(cols, vals.T)))
>>> df = fill(df, 'value1', ['March', 'April', 'May'])
>>> df
city value1 March April May value2 Jun Jul Aut
0 bj 12 12.0 NaN NaN 15 NaN NaN NaN
1 sh 8 NaN NaN 8.0 13 NaN NaN NaN
2 gz 9 NaN 9.0 NaN 9 NaN NaN NaN
3 sz 6 NaN 6.0 NaN 16 NaN NaN NaN
>>> df = fill(df, 'value2', ['Jun', 'Jul', 'Aut'])
>>> df
city value1 March April May value2 Jun Jul Aut
0 bj 12 NaN NaN 12.0 15 NaN NaN 15.0
1 sh 8 NaN NaN 8.0 13 13.0 NaN NaN
2 gz 9 NaN NaN 9.0 9 NaN NaN 9.0
3 sz 6 NaN 6.0 NaN 16 NaN NaN 16.0
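A reproducibility note: np.random.choice draws from NumPy's global state, so every run differs (as the two outputs above show). A variant of the function with a hypothetical rng parameter accepts a seeded Generator instead:

```python
import numpy as np
import pandas as pd

def fill(df, val_col, cols, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # One random column index per row, drawn from the supplied generator.
    i = rng.integers(0, len(cols), len(df))
    vals = df[cols].to_numpy(dtype=float)
    vals[np.arange(len(df)), i] = df[val_col].to_numpy()
    return df.assign(**dict(zip(cols, vals.T)))

df = pd.DataFrame({"city": ["bj", "sh"], "value1": [12, 8],
                   "March": [np.nan] * 2, "April": [np.nan] * 2, "May": [np.nan] * 2})
out = fill(df, "value1", ["March", "April", "May"], rng=np.random.default_rng(0))
```

Whatever the seed, each row ends up with exactly one filled month equal to its value1.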