pdblp.BCon.bdh usage. inserting an array as the "list" argument - blpapi

The usage for con.bdh is con.bdh('SPY US Equity', ['PX_LAST', 'VOLUME'],
'20150629', '20150630', longdata=True)
I would like to get PX_LAST and VOLUME for a list of securities that I have in an array (strings with tickers). When I try to substitute 'SPY US Equity' with the array arrtickers, or with [list(arrtickers)], I get the following error:
...eidData[] = {
}
sequenceNumber = 0
securityError = {
source = "3920::bbdbh4"
code = 15
category = "BAD_SEC"
message = "Security key is too longInvalid Security [nid:3920] "
subcategory = "INVALID_SECURITY"
}
fieldExceptions[] = {
}
fieldData[] = {
}}}
Am I using the correct syntax?

Without a reproducible example this is just a guess, but as the error message in your snippet suggests, this is likely because you are querying for an invalid security. Array syntax should work. For example, the following works fine:
In [1]: import pdblp
...: con = pdblp.BCon().start()
...: con.bdh(['SPY US Equity', 'IBM US Equity'], ['PX_LAST', 'VOLUME'],
'20150629', '20150630', longdata=True)
Out[1]:
date ticker field value
0 2015-06-29 SPY US Equity PX_LAST 2.054200e+02
1 2015-06-29 SPY US Equity VOLUME 2.026213e+08
2 2015-06-30 SPY US Equity PX_LAST 2.058500e+02
3 2015-06-30 SPY US Equity VOLUME 1.829251e+08
4 2015-06-29 IBM US Equity PX_LAST 1.629700e+02
5 2015-06-29 IBM US Equity VOLUME 3.314684e+06
6 2015-06-30 IBM US Equity PX_LAST 1.626600e+02
7 2015-06-30 IBM US Equity VOLUME 3.597288e+06
Whereas this does not
In [2]: con.bdh(['SPY US Equity', 'NOT_A_SECURITY Equity'], ['PX_LAST', 'VOLUME'],
'20150629', '20150630', longdata=True)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-f23344f8a6b3> in <module>()
----> 1 con.bdh(['SPY US Equity', 'NOT_A_SECURITY Equity'], ['PX_LAST', 'VOLUME'], '20150629', '20150630', longdata=True)
~/Projects/pdblp/pdblp/pdblp.py in bdh(self, tickers, flds, start_date, end_date, elms, ovrds, longdata)
268
269 data = self._bdh_list(tickers, flds, start_date, end_date,
--> 270 elms, ovrds)
271
272 df = pd.DataFrame(data, columns=["date", "ticker", "field", "value"])
~/Projects/pdblp/pdblp/pdblp.py in _bdh_list(self, tickers, flds, start_date, end_date, elms, ovrds)
305 .numValues() > 0)
306 if has_security_error or has_field_exception:
--> 307 raise ValueError(msg)
308 ticker = (msg.getElement('securityData')
309 .getElement('security').getValue())
ValueError: HistoricalDataResponse = {
securityData = {
security = "NOT_A_SECURITY Equity"
eidData[] = {
}
sequenceNumber = 1
securityError = {
source = "139::bbdbh3"
code = 15
category = "BAD_SEC"
message = "Unknown/Invalid securityInvalid Security [nid:139] "
subcategory = "INVALID_SECURITY"
}
fieldExceptions[] = {
}
fieldData[] = {
}
}
}
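One likely culprit in the original question is the [list(arrtickers)] wrapping: that produces a nested list, so the whole inner list is treated as a single oversized security key (hence "Security key is too long"). A minimal sketch of the conversion, assuming arrtickers is a NumPy array of ticker strings (the bdh call itself is commented out because it needs a live Bloomberg session):

```python
import numpy as np

# Hypothetical stand-in for the asker's arrtickers array of ticker strings
arrtickers = np.array(['SPY US Equity', 'IBM US Equity'])

tickers = arrtickers.tolist()   # flat list of plain strings: what bdh expects
bad = [list(arrtickers)]        # nested list: one "security" that is itself a list

print(tickers)                  # ['SPY US Equity', 'IBM US Equity']
# con.bdh(tickers, ['PX_LAST', 'VOLUME'], '20150629', '20150630', longdata=True)
```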

Thanks @mgilbert. I ended up creating a list and adding all the tickers to that list.

Related

How to fix AttributeError: type object 'list' has no attribute 'find'"?

from cgitb import text
from bs4 import BeautifulSoup
import requests
website = 'https://www.marketplacehomes.com/rent-a-home/'
result = requests.get(website)
content = result.text
soup = BeautifulSoup(content, 'html.parser')
lists = soup.find_all('div', class_=('tt-rental-row'))
for list in lists:
    location = list.find('span', class_="renta;-adress")
    beds = list.find('span', class_="renta;-beds")
    baths = list.find('span', class_="renta;-beds")
    availability = list.find('span', class_="rental-date-available")
    info = [location, beds, baths, availability]
    print(info)
If I try to run the last line of code, I get:
"IndentationError: expected an indented block"
If I try to run each indented line separately, I get:
">>> location = list.find('span', class_="renta;-adress")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'list' has no attribute 'find'"
I'm new to Python and I'm kinda stuck, can anyone please help me?
Note: Your code never runs the for-loop because your selection never matches any elements in the HTML. They are generated dynamically from data in another resource, and requests does not render websites like a browser; it only sees the static contents of the response.
Be aware not to shadow built-in names; doing so will cause errors. In your case list.find() raises one because the type object list does not have an attribute called find. You can easily check these things using type():
type(soup)
-> it's a bs4.BeautifulSoup
type(soup.find_all('div', class_=('tt-rental-row')))
-> it's a bs4.element.ResultSet
type(list)
-> it's a type
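A tiny illustration of why the shadowing matters (hypothetical values; once the loop rebinds list to a string, the real built-in is no longer reachable by that name):

```python
lists = ['row1', 'row2']      # stand-in for the ResultSet

for list in lists:            # rebinds the built-in name 'list'
    pass

print(type(list))             # <class 'str'> - no longer the list type
try:
    list('abc')               # the real built-in would return ['a', 'b', 'c']
    shadowed = False
except TypeError:             # 'str' object is not callable
    shadowed = True
print(shadowed)               # True
```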
So how do you achieve your goal?
You could use pandas to directly create a DataFrame from the site's JSON listings endpoint and slice it to your needs:
import pandas as pd
pd.read_json('https://app.tenantturner.com/listings-json/2679')
Output:
id dateActivated latitude longitude address city state zip photo title ... baths dateAvailable rentAmount acceptPets applyUrl btnUrl btnText virtualTour propertyType enableWaitlist
0 83600 8/22/2022 35.750499 -86.393972 4481 Jack Faulk St Murfreesboro TN 37127 https://ttimages.blob.core.windows.net/propert... 4481 Jack Faulk St ... 2.0 Now 2195 cats, small dogs, large dogs https://app.propertyware.com/pw/application/#/... https://app.tenantturner.com/qualify/4481-jack... Schedule Viewing None Single Family False
1 100422 8/31/2022 30.277607 -95.472842 213 Skybranch Court Conroe TX 77304 https://ttimages.blob.core.windows.net/propert... 213 Skybranch Court ... 2.5 Now 2100 cats, small dogs, large dogs https://app.propertyware.com/pw/application/#/... https://app.tenantturner.com/qualify/213-skybr... Schedule Viewing None Condo Unit False
2 106976 7/27/2022 28.274720 -82.298077 8127 Olive Brook Dr Wesley Chapel FL 33545 https://ttimages.blob.core.windows.net/propert... 8127 Olive Brook Dr ... 2.0 Now 2650 no pets https://app.propertyware.com/pw/application/#/... https://app.tenantturner.com/qualify/8127-oliv... Schedule Viewing None Single Family False
3 116188 8/15/2022 42.624023 -83.144614 735 Grace Ave Rochester Hills MI 48307 https://ttimages.blob.core.windows.net/propert... 735 Grace Ave ... 2.0 Now 1600 cats, small dogs, large dogs https://app.propertyware.com/pw/application/#/... https://app.tenantturner.com/qualify/735-grace... Schedule Viewing None Single Family False
4 126846 8/22/2022 32.046455 -81.071181 1810 E 41st St Savannah GA 31404 https://ttimages.blob.core.windows.net/propert... 1810 E 41st St ... 1.0 Now 1395 small dogs https://app.propertyware.com/pw/application/#/... https://app.tenantturner.com/qualify/1810-e-41... Schedule Viewing None Single Family True
...
91 rows × 22 columns
Example:
To show only specific columns, simply pass a list of their names.
import pandas as pd
pd.read_json('https://app.tenantturner.com/listings-json/2679')[['address', 'beds', 'baths', 'dateAvailable']]
Output
address beds baths dateAvailable
0 4481 Jack Faulk St 4 2.0 Now
1 213 Skybranch Court 3 2.5 Now
2 8127 Olive Brook Dr 3 2.0 Now
3 735 Grace Ave 3 2.0 Now
4 1810 E 41st St 3 1.0 Now
... ... ... ... ...
91 rows × 4 columns
Since list is the name of a built-in type in Python, you shouldn't use it as a variable name; try another name:
for myList in lists:
    location = myList.find('span', class_="renta;-adress")
    beds = myList.find('span', class_="renta;-beds")
    baths = myList.find('span', class_="renta;-beds")
    availability = myList.find('span', class_="rental-date-available")
    info = [location, beds, baths, availability]
    print(info)

Calculation of stock values with yfinance and python

I would like to make some calculations on stock prices in Python 3 and I have installed the module yfinance.
I try to get an individual value like this:
import yfinance as yf
#define the ticker symbol
tickerSymbol = 'MSFT'
#get data on this ticker
tickerData = yf.Ticker(tickerSymbol)
#get the historical prices for this ticker
tickerDf = tickerData.history(period='1d', start='2015-1-1', end='2020-12-30')
row_date = tickerDf[tickerDf['Date']=='2020-12-30']
value = row_date.Open.item()
#see your data
print (value)
But when I run this, it says:
KeyError: 'Date'
Which is strange because when I do this, it works well and I have the column Date:
import yfinance as yf
#define the ticker symbol
tickerSymbol = 'MSFT'
#get data on this ticker
tickerData = yf.Ticker(tickerSymbol)
#get the historical prices for this ticker
tickerDf = tickerData.history(period='1d', start='2015-1-1', end='2020-12-30')
#row_date = tickerDf[tickerDf['Date']=='2020-12-30']
#value = row_date.Open.item()
#see your data
print (tickerDf)
I get the following result:
G:\python> python test.py
Open High Low Close Volume Dividends Stock Splits
Date
2014-12-31 41.512481 42.143207 41.263744 41.263744 21552500 0.0 0
2015-01-02 41.450302 42.125444 41.343701 41.539135 27913900 0.0 0
2015-01-05 41.192689 41.512495 41.086088 41.157158 39673900 0.0 0
2015-01-06 41.201567 41.530255 40.455355 40.553074 36447900 0.0 0
2015-01-07 40.846223 41.272629 40.410934 41.068310 29114100 0.0 0
... ... ... ... ... ... ... ...
2020-12-22 222.690002 225.630005 221.850006 223.940002 22612200 0.0 0
2020-12-23 223.110001 223.559998 220.800003 221.020004 18699600 0.0 0
2020-12-24 221.419998 223.610001 221.199997 222.750000 10550600 0.0 0
2020-12-28 224.449997 226.029999 223.020004 224.960007 17933500 0.0 0
2020-12-29 226.309998 227.179993 223.580002 224.149994 17403200 0.0 0
[1510 rows x 7 columns]
Under the hood, yfinance uses a pandas DataFrame to back a Ticker's history. In this dataframe, Date isn't an ordinary column but is instead the name given to the index (see line 240 in base.py of yfinance). The index behaves differently from other columns and can't be referenced by name like one. You can access it using tickerDf.index == '2020-12-30', or turn it into a regular column using reset_index as explained in another question. Searching an index is faster than searching a regular column, so if you are looking through a lot of data it is to your advantage to leave it as an index.
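Both lookups can be demonstrated without a network call on a small frame shaped like what tickerData.history() returns (hypothetical prices, with Date as the index name):

```python
import pandas as pd

# Toy stand-in for the yfinance history frame: Date is the index, not a column
tickerDf = pd.DataFrame(
    {'Open': [224.45, 226.31], 'Close': [224.96, 224.15]},
    index=pd.to_datetime(['2020-12-28', '2020-12-29']),
)
tickerDf.index.name = 'Date'

# Option 1: filter on the index directly
value = tickerDf[tickerDf.index == '2020-12-29'].Open.item()
print(value)  # 226.31

# Option 2: promote the index to a regular 'Date' column with reset_index
flat = tickerDf.reset_index()
row = flat[flat['Date'] == '2020-12-29']
print(row.Open.item())  # 226.31
```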

how to replace stock-prices symbols in a dataframe

I would like to get the S&P 500 ['Adj Close'] column and replace the column name with the corresponding stock symbol; however, I am not able to replace the dataframe column because it gives me an error: KeyError: '5'
What I would like to achieve is to loop through all the available stocks from the list and replace the Adj Close with the stock symbol.
This is what I did:
First I have scraped the stock symbols from Wikipedia and added them to a list.
data = pd.read_html('https://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
symbols = data[0] # get first table
symbols.head()
stock = symbols['Symbol'].to_list()
print(stock[0:5])
this gives me a list of stock symbols as below:
['MMM', 'ABT', 'ABBV', 'ABMD', 'ACN']
then I scraped Yahoo finance to get the daily financial data as below
stock_url = 'https://query1.finance.yahoo.com/v7/finance/download/{}?'
params = {
    'range': '1y',
    'interval': '1d',
    'events': 'history'
}
response = requests.get(stock_url.format(stock[0]), params=params)
file = StringIO(response.text)
reader = csv.reader(file)
data = list(reader)
df = pd.DataFrame(data)
stock_data = df['5']
Fix for the key error
You are calling the URL with the whole list stock, which gave a 404 response when I tried it. Call the URL with an individual stock instead:
requests.get(stock_url.format(stock[0]), params=params)
Also, column 5 is stored as an integer instead of a character; that is the reason you got the key error. Use:
stock_data = df[5]
I tried it for stock 'MMM' (stock[0]) and it prints:
0 1 2 3 4 5 \
0 Date Open High Low Close Adj Close
1 2019-12-11 168.380005 168.839996 167.330002 168.740005 162.682480
2 2019-12-12 166.729996 170.850006 166.330002 168.559998 162.508926
3 2019-12-13 169.619995 171.119995 168.080002 168.789993 162.730667
4 2019-12-16 168.940002 170.830002 168.190002 170.750000 164.620316
.. ... ... ... ... ... ...
249 2020-12-04 172.130005 173.160004 171.539993 172.460007 172.460007
250 2020-12-07 171.720001 172.500000 169.179993 170.149994 170.149994
251 2020-12-08 169.740005 172.830002 169.699997 172.460007 172.460007
252 2020-12-09 172.669998 175.639999 171.929993 175.289993 175.289993
253 2020-12-10 174.869995 175.399994 172.690002 173.490005 173.490005
[254 rows x 7 columns]
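The integer-vs-string distinction is easy to reproduce on a toy frame: when a DataFrame is built without column names, pandas assigns integer labels, so df[5] works while df['5'] raises a KeyError (hypothetical data):

```python
import pandas as pd

rows = [['Date', 'Open', 'High', 'Low', 'Close', 'Adj Close', 'Volume'],
        ['2019-12-11', '168.38', '168.84', '167.33', '168.74', '162.68', '3219900']]
df = pd.DataFrame(rows)           # columns default to a RangeIndex: 0, 1, ..., 6

print(df[5].tolist())             # ['Adj Close', '162.68'] - integer label works
try:
    df['5']                       # string label does not exist
    failed = False
except KeyError:
    failed = True
print(failed)                     # True
```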
Loop through stocks and replace Adj Close (Edited as per requirements from comments)
Code for looping through stocks and replacing Adj close with Stock symbol.
stock_url = 'https://query1.finance.yahoo.com/v7/finance/download/{}?'
params = {
    'range' : '1y',
    'interval' : '1d',
    'events' : 'history'
}
df = pd.DataFrame()
for i in stock:
    response = requests.get(stock_url.format(i), params=params)
    file = io.StringIO(response.text)
    reader = csv.reader(file)
    data = list(reader)
    df1 = pd.DataFrame(data)
    df1.loc[df1[5] == 'Adj Close', 5] = i
    df = df.append(df1)
I tried the code for the first 3 stocks and here is the result:
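As a side note, since the endpoint returns CSV, pandas can parse it directly with a proper header row, which avoids juggling integer column labels altogether; a sketch on hypothetical response text standing in for response.text:

```python
import io
import pandas as pd

# Hypothetical stand-in for response.text from the download endpoint
csv_text = (
    "Date,Open,High,Low,Close,Adj Close,Volume\n"
    "2020-12-09,172.67,175.64,171.93,175.29,175.29,2300\n"
    "2020-12-10,174.87,175.40,172.69,173.49,173.49,2100\n"
)
df1 = pd.read_csv(io.StringIO(csv_text))          # header row becomes column names
df1 = df1.rename(columns={'Adj Close': 'MMM'})    # tag the price column with the ticker
print(df1.columns.tolist())
```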

why am i getting the same post data though i'm posting to different URL

I'm trying to scrape http://www.moneycontrol.com/stocks/histstock.php?sc_id=BPC&mycomp=BPCL to get price data.
So I followed these steps:
Opened up that link and fed in the dates (daily)
chrome -> Inspect -> Network: obtained the form details and found the URL for the POST
Fed in the form data and hit POST
I have multiple tickers for which I need the data, e.g.:
'AXISBANK': 'http://www.moneycontrol.com/stocks/hist_stock_result.php?ex=N&sc_id=API&mycomp=AXISBANK',
'BAJAJ-AUTO': 'http://www.moneycontrol.com/stocks/hist_stock_result.php?ex=N&sc_id=API&mycomp=BPCL',
But when I run the POST I get the same output even though the URLs I'm posting to are different.
What could I be missing?
Output:
running for http://www.moneycontrol.com/stocks/hist_stock_result.php?ex=N&sc_id=API&mycomp=AXISBANK
Date Open High Low Close Volume
244 05-01-2016 881.3 905.00 881.3 900.65 1372748
245 04-01-2016 876.2 892.45 871.7 880.80 709103
246 01-01-2016 882.0 885.60 876.9 878.75 294006
running for http://www.moneycontrol.com/stocks/hist_stock_result.php?ex=N&sc_id=API&mycomp=BPCL
Date Open High Low Close Volume
244 05-01-2016 881.3 905.00 881.3 900.65 1372748
245 04-01-2016 876.2 892.45 871.7 880.80 709103
246 01-01-2016 882.0 885.60 876.9 878.75 294006
This is the code I wrote to test it:
url='http://www.moneycontrol.com/stocks/hist_stock_result.php?ex=N&sc_id=API&mycomp=AXISBANK'
url2='http://www.moneycontrol.com/stocks/hist_stock_result.php?ex=N&sc_id=API&mycomp=BPCL'
import requests
import pandas as pd
from bs4 import BeautifulSoup as bs
data = {
    'frm_dy':'01',
    'frm_mth':'01',
    'frm_yr':'2016',
    'to_dy':'31',
    'to_mth':'12',
    'to_yr':'2016',
    'hdn':'daily'
    # 'x':'15',
    # 'y':'14'
}
print('running for {}'.format(url))
test = requests.post(url,data=data) # Post the data
doc = bs(test.text,'html.parser')
tables = doc.find('table',{'class':'tblchart'})
tData = pd.read_html(str(tables),header=1) #You get a list
#Convert it to dataFrame
tData = tData[0].drop(columns=['(High-Low)','(Open-Close)'])
print(tData.tail(3))
import time
time.sleep(20) # Hopefully sleep works?
url = url2 # test only
print('running for {}'.format(url))
test = requests.post(url,data=data)
doc = bs(test.text,'html.parser')
tables = doc.find('table',{'class':'tblchart'})
tData = pd.read_html(str(tables),header=1) #You get a list
#Convert it to dataFrame
tData = tData[0].drop(columns=['(High-Low)','(Open-Close)'])
print(tData.tail(3))
I noticed that sc_id changed when I ran it directly from the URL vs. what I saw in Inspect.
I don't know what sc_id is (a session ID?).
I'm totally new to web scraping, so I don't really know the gotchas or whether I've hit any.
What could I be missing?
You have to set the sc_id= parameter in the URL correctly.
For Axis Bank it's UTI10.
For Bajaj Auto it's BA06.
For example:
import re
import requests
import pandas as pd
from bs4 import BeautifulSoup
def get_sc_id(name, full_name):
    url = 'https://www.moneycontrol.com/stocks/autosuggest.php'
    params = {'str': name}
    return re.search(r'set_val\(\'{}\',\'(.*?)\'\)'.format(full_name), requests.get(url, params=params).text, flags=re.I)[1]

def get_table(sc_id, mycomp):
    url = 'https://www.moneycontrol.com/stocks/hist_stock_result.php'
    params = {
        'ex': 'B',
        'sc_id': sc_id,
        'mycomp': mycomp
    }
    data = {
        'frm_dy': '01',
        'frm_mth': '01',
        'frm_yr': '2016',
        'to_dy': '31',
        'to_mth': '12',
        'to_yr': '2016',
        'hdn': 'daily'
    }
    soup = BeautifulSoup(requests.post(url, data=data, params=params).content, 'html.parser')
    return pd.read_html( str(soup.select_one('.tblchart')) )[0].droplevel(0, axis=1)
code = get_sc_id('AXIS', 'Axis Bank')
print('Axis Bank code: ', code)
print(get_table(code, 'Axis Bank'))
code = get_sc_id('BAJAJ', 'Bajaj Auto')
print('Bajaj Auto code:', code )
print(get_table(code, 'Bajaj Auto'))
Prints:
Axis Bank code: UTI10
Date Open High Low Close Volume (High-Low) (Open-Close)
0 30-12-2016 446.00 451.80 443.45 450.00 234037 8.35 -4.00
1 29-12-2016 447.00 447.00 437.80 444.15 267677 9.20 2.85
2 28-12-2016 437.45 447.85 436.00 439.50 251149 11.85 -2.05
3 27-12-2016 430.00 438.55 430.00 437.45 210857 8.55 -7.45
4 26-12-2016 432.15 436.00 427.00 431.75 405044 9.00 0.40
.. ... ... ... ... ... ... ... ...
242 07-01-2016 424.25 425.00 407.30 409.35 1441934 17.70 14.90
243 06-01-2016 439.70 439.70 429.80 430.80 730512 9.90 8.90
244 05-01-2016 439.00 440.00 433.65 436.35 726947 6.35 2.65
245 04-01-2016 448.85 448.85 437.40 439.25 743518 11.45 9.60
246 01-01-2016 450.00 452.70 445.80 449.80 433052 6.90 0.20
[247 rows x 8 columns]
Bajaj Auto code: BA06
Date Open High Low Close Volume (High-Low) (Open-Close)
0 30-12-2016 2655.55 2667.00 2627.25 2633.85 10377 39.75 21.70
1 29-12-2016 2621.00 2665.65 2611.50 2655.45 8704 54.15 -34.45
2 28-12-2016 2629.35 2653.00 2624.55 2631.60 6475 28.45 -2.25
3 27-12-2016 2563.00 2642.00 2563.00 2633.60 15491 79.00 -70.60
4 26-12-2016 2618.00 2618.35 2578.00 2596.70 7205 40.35 21.30
.. ... ... ... ... ... ... ... ...
242 07-01-2016 2470.00 2481.80 2407.25 2419.25 15962 74.55 50.75
243 06-01-2016 2495.00 2513.70 2475.00 2485.50 11975 38.70 9.50
244 05-01-2016 2518.00 2520.00 2480.00 2497.05 11967 40.00 20.95
245 04-01-2016 2507.90 2545.85 2480.65 2488.15 23077 65.20 19.75
246 01-01-2016 2530.00 2530.00 2512.15 2520.05 9055 17.85 9.95
[247 rows x 8 columns]
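The heart of get_sc_id is the regular expression; it can be checked offline against a hypothetical fragment of the autosuggest response:

```python
import re

# Hypothetical fragment of the autosuggest.php response body
body = "... onclick=\"set_val('Axis Bank','UTI10')\" ..."

full_name = 'Axis Bank'
match = re.search(r'set_val\(\'{}\',\'(.*?)\'\)'.format(full_name), body, flags=re.I)
print(match[1])  # UTI10
```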

python - cannot make corr work

I'm struggling with getting a simple correlation done. I've tried all that was suggested under similar questions.
Here are the relevant parts of the code, the various attempts I've made and their results.
import numpy as np
import pandas as pd
try01 = data[['ESA Index_close_px', 'CCMP Index_close_px' ]].corr(method='pearson')
print (try01)
Out:
Empty DataFrame
Columns: []
Index: []
try04 = data['ESA Index_close_px'][5:50].corr(data['CCMP Index_close_px'][5:50])
print (try04)
Out:
**AttributeError: 'float' object has no attribute 'sqrt'**
using numpy
try05 = np.corrcoef(data['ESA Index_close_px'],data['CCMP Index_close_px'])
print (try05)
Out:
AttributeError: 'float' object has no attribute 'sqrt'
converting the columns to lists
ESA_Index_close_px_list = list()
start_value = 1
end_value = len(data['ESA Index_close_px']) + 1
for items in data['ESA Index_close_px']:
    ESA_Index_close_px_list.append(items)
    start_value = start_value + 1
    if start_value == end_value:
        break
    else:
        continue
CCMP_Index_close_px_list = list()
start_value = 1
end_value = len(data['CCMP Index_close_px']) + 1
for items in data['CCMP Index_close_px']:
    CCMP_Index_close_px_list.append(items)
    start_value = start_value + 1
    if start_value == end_value:
        break
    else:
        continue
try06 = np.corrcoef(['ESA_Index_close_px_list','CCMP_Index_close_px_list'])
print (try06)
Out:
**TypeError: cannot perform reduce with flexible type**
Also tried .astype, but it made no difference:
data['ESA Index_close_px'].astype(float)
data['CCMP Index_close_px'].astype(float)
Using Python 3.5, pandas 0.18.1 and numpy 1.11.1
Would really appreciate any suggestion.
**edit 1:**
Data is coming from an Excel spreadsheet:
data = pd.read_excel('C:\\Users\\Ako\\Desktop\\ako_files\\for_corr_tool.xlsx')
Prior to the correlation attempts there are only column renames and
data = data.drop(data.index[0])
to get rid of a line.
regarding the types:
print (type (data['ESA Index_close_px']))
print (type (data['ESA Index_close_px'][1]))
Out:
**edit 2:**
parts of the data:
print (data['ESA Index_close_px'][1:10])
print (data['CCMP Index_close_px'][1:10])
Out:
2 2137
3 2138
4 2132
5 2123
6 2127
7 2126.25
8 2131.5
9 2134.5
10 2159
Name: ESA Index_close_px, dtype: object
2 5241.83
3 5246.41
4 5243.84
5 5199.82
6 5214.16
7 5213.33
8 5239.02
9 5246.79
10 5328.67
Name: CCMP Index_close_px, dtype: object
Well, I encountered the same problem today.
Try using .astype('float64') to get the types right:
data['ESA Index_close_px'][5:50].astype('float64').corr(data['CCMP Index_close_px'][5:50].astype('float64'))
This works well for me; hope it helps you as well.
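The failure mode is easy to reproduce: an object-dtype Series of numeric strings (which is what read_excel can produce after mixed rows) doesn't correlate cleanly, while casting to float64 fixes it. A sketch on hypothetical values shaped like the asker's columns:

```python
import pandas as pd

# Numeric values stored as strings -> object dtype, like the asker's columns
s1 = pd.Series(['2137', '2138', '2132', '2123'], dtype=object)
s2 = pd.Series(['5241.83', '5246.41', '5243.84', '5199.82'], dtype=object)

# Cast both sides to float64 before correlating
r = s1.astype('float64').corr(s2.astype('float64'))
print(r)
```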
You can try the following:
Top15['Citable docs per capita'] = Top15['Citable docs per capita'] * 100000
Top15['Citable docs per capita'].astype('int').corr(Top15['Energy Supply per Capita'].astype('int'))
It worked for me.
