Python how do I get all ids from this api dictionary - python-3.x

My code
import requests
groups = requests.get(f'https://groups.roblox.com/v2/users/1283171278/groups/roles')
groupsIds = groups.json()['id']
Error I get when I run it:
Traceback (most recent call last):
File "c:/Users/badam/Documents/Request/Testing/Group Checker.py", line 16, in <module>
groupsIds = groups.json()['id']
KeyError: 'id'
The list is pretty big and has about 100 ids. Is there any way I can grab all of them and put them into a list?
Here is the API I'm looking at: https://groups.roblox.com/v2/users/1283171278/groups/roles

There's a 'data' key in the result, so iterate over it and get the id from the 'role' and 'group' keys:
import json
import requests

groups = requests.get('https://groups.roblox.com/v2/users/1283171278/groups/roles')
groupsIds = groups.json()
# uncomment this to print all data:
# print(json.dumps(groupsIds, indent=4))
for d in groupsIds['data']:
    print(d['group']['id'])
    print(d['role']['id'])
    print('-' * 80)
Prints:
...
--------------------------------------------------------------------------------
3904940
26571394
--------------------------------------------------------------------------------
3933808
26746825
--------------------------------------------------------------------------------
3801996
25946726
--------------------------------------------------------------------------------
...
EDIT: To store all IDs in a list, you can do:
import json
import requests

groups = requests.get('https://groups.roblox.com/v2/users/1283171278/groups/roles')
groupsIds = groups.json()
# uncomment this to print all data:
# print(json.dumps(groupsIds, indent=4))
all_ids = []
for d in groupsIds['data']:
    all_ids.append(d['group']['id'])
    all_ids.append(d['role']['id'])
print(all_ids)
Prints:
[4998084, 33338613, 4484383, 30158860, 4808983, 32158531, 3912486, 26617376, 4387638, 29571745, 3686254, 25244399, 4475916, 30106939, 4641294, 31134295, 713898, 4280806, 4093279, 27731862, 998593, 6351840, 913147, 5698957, 3516878, 24177537, 659270, 3905638, 4427759, 29811565, 587575, 3437383, 4332819, 29229751, 3478315, 23936152, 2811716, 19021100, 472523, 2752478, 4734036, 31698614, 3338157, 23028518, 1014052, 6471675, 4609359, 30934766, 3939008, 26778648, 4817725, 32211571, 601160, 3519151, 4946683, 33009776, 1208148, 7978533, 4449651, 29945000, 4634345, 31089984, 5026319, 33518257, 2629826, 17481464, 821916, 5031014, 4926232, 32881634, 4897605, 32702661, 3740736, 25575394, 448496, 2610176, 5036596, 33583400, 4876498, 32570198, 3165187, 21791248, 4792137, 32055453, 856700, 5279506, 4734914, 31704012, 1098712, 7127871, 4499514, 30252966, 3846707, 26215627, 1175898, 7725693, 4503430, 30276985, 3344443, 23071179, 2517476, 16546164, 4694162, 31461944, 780837, 4744647, 2616253, 17367584, 2908472, 19824345, 2557006, 16869679, 2536583, 16700979, 535476, 3126430, 3928108, 26712025, 641347, 3781071, 2931845, 20016645, 1233647, 8182827, 2745521, 18459791, 803463, 4902387, 490144, 2856446, 488760, 2848189, 3574179, 24536348, 4056581, 27505272, 1007736, 6423078, 4500976, 30261867, 898461, 5588947, 4161433, 28157771, 4053816, 27488481, 4774722, 31947028, 3091411, 21243745, 3640836, 24959291, 3576224, 24548933, 3770621, 25756964, 4142662, 28039034, 538165, 3142587, 539598, 3150924, 602671, 3528176, 537016, 3135630, 3175117, 21863370, 4633984, 31087818, 3904940, 26571394, 3933808, 26746825, 3801996, 25946726, 3818582, 26046898, 4056358, 27503773, 823297, 5040824, 4226680, 28562065, 4047138, 27446480, 4200090, 28397805, 821483, 5027958, 624104, 3658165, 3576317, 24549546, 3257526, 22460774]

Try this code - it returns the first group id:
print(groups.json()['data'][0]['group']['id'])
For iteration, try this:
groupID = groups.json()['data']
length_of_data = len(groups.json()['data'])
for i in range(length_of_data):
    print(groupID[i]['group']['id'])
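The loops above can also be written as list comprehensions. A minimal sketch, using a sample payload shaped like the API response (the real data comes from requests.get(...).json() as in the answers above):

```python
# Sample payload shaped like the Roblox groups/roles API response.
payload = {
    'data': [
        {'group': {'id': 3904940}, 'role': {'id': 26571394}},
        {'group': {'id': 3933808}, 'role': {'id': 26746825}},
    ]
}

# One pass per kind of id; keeps group ids and role ids separate.
group_ids = [d['group']['id'] for d in payload['data']]
role_ids = [d['role']['id'] for d in payload['data']]
print(group_ids)  # [3904940, 3933808]
print(role_ids)   # [26571394, 26746825]
```

Keeping the two id kinds in separate lists is usually easier to work with later than interleaving them into one flat list.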

Related

Invalid Number of Arguments when trying to add arrays inside a list using python

I am trying to understand why I get an "invalid number of arguments" error in the code below, and whether there is a way to fix it.
here is the code:
import numpy as np
acc_reading = []
a = np.array([0.11e+00, 1.11e-08, 1.11e-02])
b = np.array([0.12e+00, 1.22e-08, 2.22e-02])
c = np.array([3.11e+00, 3.18e-08, 3.33e-02])
d = np.array([3.41e+00, 4.18e-08, 4.31e-02])
e = np.array([0.55e+00, 1.55e-08, 5.31e-02])
f = np.array([0.66e+00, 1.66e-08, 3.66e-02])
g = np.array([0.66e+00, 1.66e-08, 3.66e-02])
h = np.array([0.66e+00, 1.66e-08, 3.66e-02])
ab = np.add(a,b)
cd = np.add(c, d)
ef = np.add(e, f)
i = np.add(g, h)
acc_reading.append(ab)
acc_reading.append(cd)
acc_reading.append(ef)
acc_reading.append(i)
kk = np.add(acc_reading[0], acc_reading[1], acc_reading[2], acc_reading[3])
The output of the above code is:
ValueError: invalid number of arguments
Read the docs for np.add and np.sum.
Your list has 4 terms of size 3:
In [213]: acc_reading
Out[213]:
[array([2.30e-01, 2.33e-08, 3.33e-02]),
array([6.52e+00, 7.36e-08, 7.64e-02]),
array([1.21e+00, 3.21e-08, 8.97e-02]),
array([1.32e+00, 3.32e-08, 7.32e-02])]
sum all values to one:
In [214]: np.sum(acc_reading)
Out[214]: 9.5526001622
sum rows and columns - treating acc_reading as a (4,3) array:
In [215]: np.sum(acc_reading, axis=0)
Out[215]: array([9.280e+00, 1.622e-07, 2.726e-01])
In [216]: np.sum(acc_reading, axis=1)
Out[216]: array([0.26330002, 6.59640007, 1.29970003, 1.39320003])
Your attempt to use np.add (which you used correctly earlier)
In [217]: np.add(acc_reading[0], acc_reading[1], acc_reading[2], acc_reading[3])
...:
Traceback (most recent call last):
File "<ipython-input-217-52b8f942b588>", line 1, in <module>
np.add(acc_reading[0], acc_reading[1], acc_reading[2], acc_reading[3])
TypeError: add() takes from 2 to 3 positional arguments but 4 were given
giving it just 2 arrays:
In [218]: np.add(acc_reading[0], acc_reading[1])
Out[218]: array([6.750e+00, 9.690e-08, 1.097e-01])
a more direct way:
In [220]: arr = np.array([a+b, c+d, e+f, g+h])
In [221]: arr
Out[221]:
array([[2.30e-01, 2.33e-08, 3.33e-02],
[6.52e+00, 7.36e-08, 7.64e-02],
[1.21e+00, 3.21e-08, 8.97e-02],
[1.32e+00, 3.32e-08, 7.32e-02]])
In [222]: arr.sum()
Out[222]: 9.5526001622
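The key point is that np.add is a binary ufunc (its optional third positional argument is an output array, not another operand), so reduce over the list instead of passing four arrays. A sketch with the same values as the transcript above:

```python
import numpy as np

# The four summed pairs (ab, cd, ef, i) from the question.
acc_reading = [
    np.array([2.30e-01, 2.33e-08, 3.33e-02]),
    np.array([6.52e+00, 7.36e-08, 7.64e-02]),
    np.array([1.21e+00, 3.21e-08, 8.97e-02]),
    np.array([1.32e+00, 3.32e-08, 7.32e-02]),
]

# np.add takes exactly two operands; np.add.reduce folds it over the list.
elementwise = np.add.reduce(acc_reading)   # same as np.sum(acc_reading, axis=0)
total = np.sum(acc_reading)                # one grand total across everything

print(elementwise)  # array([9.280e+00, 1.622e-07, 2.726e-01])
print(total)        # 9.5526001622
```

np.add(a, b, c) would silently write a+b into c, which is rarely what you want here.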

how to replace stock-prices symbols in a dataframe

I would like to get the S&P 500 'Adj Close' column and replace the column name with the corresponding stock symbol. However, I am not able to replace the dataframe column because it gives me an error: KeyError: '5'.
What I would like to achieve is to loop through all the available stocks from the list and replace the Adj Close with the stock symbol.
This is what I did:
First I have scraped the stock symbols from Wikipedia and added them to a list.
data = pd.read_html('https://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
symbols = data[0] # get the first table
symbols.head()
stock = symbols['Symbol'].to_list()
print(stock[0:5])
this gives me a list of stock symbols as below:
['MMM', 'ABT', 'ABBV', 'ABMD', 'ACN']
then I scraped Yahoo finance to get the daily financial data as below
stock_url = 'https://query1.finance.yahoo.com/v7/finance/download/{}?'
params = {
    'range': '1y',
    'interval': '1d',
    'events': 'history'
}
response = requests.get(stock_url.format(stock[0]), params=params)
file = StringIO(response.text)
reader = csv.reader(file)
data = list(reader)
df = pd.DataFrame(data)
stock_data = df['5']
Fix for the key error
You are calling the URL with the whole 'stock' list, which gave a 404 response when I tried it. Call the URL with an individual stock instead:
requests.get(stock_url.format(stock[0]), params=params)
Also, column 5 is stored as the integer 5, not the character '5' - that is why you got the KeyError. Use:
stock_data = df[5]
I tried it for stock 'MMM' (stock[0]) and it prints:
0 1 2 3 4 5 \
0 Date Open High Low Close Adj Close
1 2019-12-11 168.380005 168.839996 167.330002 168.740005 162.682480
2 2019-12-12 166.729996 170.850006 166.330002 168.559998 162.508926
3 2019-12-13 169.619995 171.119995 168.080002 168.789993 162.730667
4 2019-12-16 168.940002 170.830002 168.190002 170.750000 164.620316
.. ... ... ... ... ... ...
249 2020-12-04 172.130005 173.160004 171.539993 172.460007 172.460007
250 2020-12-07 171.720001 172.500000 169.179993 170.149994 170.149994
251 2020-12-08 169.740005 172.830002 169.699997 172.460007 172.460007
252 2020-12-09 172.669998 175.639999 171.929993 175.289993 175.289993
253 2020-12-10 174.869995 175.399994 172.690002 173.490005 173.490005
[254 rows x 7 columns]
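A side note: parsing the CSV response with pandas directly gives named columns, so the positional integer index (and the int-vs-str confusion) goes away entirely. A minimal sketch, with a sample CSV string standing in for response.text:

```python
import pandas as pd
from io import StringIO

# Sample CSV standing in for response.text from the Yahoo endpoint above.
csv_text = (
    "Date,Open,High,Low,Close,Adj Close,Volume\n"
    "2019-12-11,168.38,168.84,167.33,168.74,162.68,2000000\n"
)

# read_csv uses the header row, so columns are named rather than numbered.
df = pd.read_csv(StringIO(csv_text))
print(df['Adj Close'].iloc[0])  # 162.68
```

This also skips the intermediate csv.reader / list / DataFrame round-trip.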
Loop through stocks and replace Adj Close (Edited as per requirements from comments)
Code for looping through stocks and replacing Adj close with Stock symbol.
stock_url = 'https://query1.finance.yahoo.com/v7/finance/download/{}?'
params = {
    'range': '1y',
    'interval': '1d',
    'events': 'history'
}
df = pd.DataFrame()
for i in stock:
    response = requests.get(stock_url.format(i), params=params)
    file = io.StringIO(response.text)
    reader = csv.reader(file)
    data = list(reader)
    df1 = pd.DataFrame(data)
    df1.loc[df1[5] == 'Adj Close', 5] = i
    df = df.append(df1)
I tried the code for the first 3 stocks and it produced the combined dataframe as expected.
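One design note: newer pandas versions removed DataFrame.append, so collecting the per-stock frames in a list and concatenating once at the end is a safer (and faster) pattern. A sketch with sample frames standing in for the df1 built inside the loop above:

```python
import pandas as pd

# Sample per-stock frames standing in for the df1 built per iteration.
frames = []
for symbol in ['MMM', 'ABT', 'ABBV']:
    df1 = pd.DataFrame({'Date': ['2019-12-11'],
                        'Adj Close': [162.68],
                        'Symbol': [symbol]})
    frames.append(df1)

# One concat at the end instead of df = df.append(df1) inside the loop.
df = pd.concat(frames, ignore_index=True)
print(df['Symbol'].tolist())  # ['MMM', 'ABT', 'ABBV']
```

Appending inside the loop copies the accumulated dataframe on every iteration, so this also avoids quadratic cost as the stock list grows.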

for loop over list KeyError: 664

I am trying to iterate over this list of words:
CTCCTC TCCTCT CCTCTC CTCTCC TCTCCC CTCCCA TCCCAA CCCAAA CCAAAC CAAACT
CTGGGC TGGGCC GGGCCA GGCCAA GCCAAT CCAATG CAATGC AATGCC ATGCCT TGCCTG GCCTGC
TGCCAG GCCAGG CCAGGA CAGGAG AGGAGG GGAGGG GAGGGG AGGGGC GGGGCT GGGCTG GGCTGG GCTGGT CTGGTC
TGGTCT GGTCTG GTCTGG TCTGGA CTGGAC TGGACA GGACAC GACACT ACACTA CACTAT
ATTCAG TTCAGC TCAGCC CAGCCA AGCCAG GCCAGT CCAGTC CAGTCA AGTCAA GTCAAC TCAACA CAACAC AACACA
ACACAA CACAAG ACAAGG AGGTGG GGTGGC GTGGCC TGGCCT GGCCTG GCCTGC CCTGCA CTGCAC
TGCACT GCACTC CACTCG ACTCGA CTCGAG TCGAGG CGAGGT GAGGTT AGGTTC GGTTCC
TATATA ATATAC TATACC ATACCT TACCTG ACCTGG CCTGGT CTGGTA TGGTAA GGTAAT GTAATG TAATGG AATGGA
I am using a for loop to read each item in the list and pass it through mk_model.vector. The code is as follows:
for x in all_seq_sentences[:]:
    mk_model.vector(x)
    print(x)
Usually, mk_model.vector("AGT") gives an array from the dna2vec model, but here, rather than actually running the model, it throws this error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-144-77c47b13e98a> in <module>
1 for x in all_seq_sentences[:]:
----> 2 mk_model.vector(x)
3 print(x)
4
~/Desktop/DNA2vec/dna2vec/dna2vec/multi_k_model.py in vector(self, vocab)
35
36 def vector(self, vocab):
---> 37 return self.data[len(vocab)].model[vocab]
38
39 def unitvec(self, vec):
KeyError: 664
Looking forward to some help here
The problem occurred because the for loop took each whole line as a single item, so .split() is the right fix. For details see https://python-reference.readthedocs.io/en/latest/docs/str/split.html
Working code:
for i in all_seq_sentences:
    word = i.split()
    print(word[0])
Then implement another loop to call the model.vector function:
vec_of_all_seq = []
for sentence in all_seq_sentences:
    sentence = sentence.split()
    for word in sentence:
        vec_of_all_seq.append(mk_model.vector(word))
The vector representations from mk_model.vector are collected in the vec_of_all_seq list.

how to perform a calculation for all elements of a list in parallel to reduce the run time?

I am extracting a dataframe from the Yahoo API for a list of companies, but I want to do it in parallel (i.e. the dataframes for all the companies should be extracted simultaneously), so it reduces my run time.
My code:
import pandas_datareader.data as web
import pandas as pd
import datetime
end_date = datetime.datetime.now().strftime('%d/%m/%Y')
temp = datetime.datetime.now() - datetime.timedelta(6*365/12)
start_date = temp.strftime('%d/%m/%Y')
f = web.DataReader('ACC.NS', 'yahoo', start_date, end_date)
print(f)
The output of this code is a dataframe (shown below).
This I have done for a single company.
I want to do it for a list of companies, which is:
Company_Names = ['ACC', 'ADANIENT', 'ADANIPORTS', 'ADANIPOWER', 'AJANTPHARM', 'ALBK', 'AMARAJABAT', 'AMBUJACEM', 'APOLLOHOSP', 'APOLLOTYRE', 'ARVIND', 'ASHOKLEY', 'ASIANPAINT', 'AUROPHARMA', 'AXISBANK',
'BAJAJ-AUTO', 'BAJFINANCE', 'BAJAJFINSV', 'BALKRISIND', 'BANKBARODA', 'BANKINDIA', 'BATAINDIA', 'BEML', 'BERGEPAINT', 'BEL', 'BHARATFIN', 'BHARATFORG', 'BPCL', 'BHARTIARTL', 'INFRATEL', 'BHEL', 'BIOCON', 'BOSCHLTD', 'BRITANNIA',
'CADILAHC', 'CANFINHOME', 'CANBK', 'CAPF', 'CASTROLIND', 'CEATLTD', 'CENTURYTEX', 'CESC', 'CGPOWER', 'CHENNPETRO', 'CHOLAFIN', 'CIPLA', 'COALINDIA', 'COLPAL', 'CONCOR', 'CUMMINSIND', 'DABUR', 'DCBBANK',
'DHFL', 'DISHTV', 'DIVISLAB', 'DLF', 'DRREDDY', 'EICHERMOT', 'ENGINERSIN', 'EQUITAS', 'ESCORTS', 'EXIDEIND',
'FEDERALBNK', 'GAIL', 'GLENMARK', 'GMRINFRA', 'GODFRYPHLP', 'GODREJCP', 'GODREJIND', 'GRANULES', 'GRASIM', 'GSFC', 'HAVELLS', 'HCLTECH', 'HDFCBANK', 'HDFC', 'HEROMOTOCO', 'HEXAWARE', 'HINDALCO', 'HCC', 'HINDPETRO', 'HINDUNILVR',
'HINDZINC', 'ICICIBANK', 'ICICIPRULI', 'IDBI', 'IDEA', 'IDFCBANK', 'IDFC', 'IFCI', 'IBULHSGFIN', 'INDIANB', 'IOC', 'IGL', 'INDUSINDBK', 'INFIBEAM', 'INFY', 'INDIGO', 'IRB', 'ITC', 'JISLJALEQS', 'JPASSOCIAT', 'JETAIRWAYS', 'JINDALSTEL',
'JSWSTEEL', 'JUBLFOOD', 'JUSTDIAL', 'KAJARIACER', 'KTKBANK', 'KSCL', 'KOTAKBANK', 'KPIT', 'L&TFH', 'LT', 'LICHSGFIN', 'LUPIN', 'M&MFIN', 'MGL', 'M&M', 'MANAPPURAM', 'MRPL', 'MARICO', 'MARUTI', 'MFSL', 'MINDTREE', 'MOTHERSUMI', 'MRF', 'MCX',
'MUTHOOTFIN', 'NATIONALUM', 'NBCC', 'NCC', 'NESTLEIND', 'NHPC', 'NIITTECH', 'NMDC', 'NTPC', 'ONGC', 'OIL', 'OFSS', 'ORIENTBANK', 'PAGEIND', 'PCJEWELLER', 'PETRONET', 'PIDILITIND', 'PEL', 'PFC', 'POWERGRID', 'PTC', 'PNB', 'PVR', 'RAYMOND',
'RBLBANK', 'RELCAPITAL', 'RCOM', 'RELIANCE', 'RELINFRA', 'RPOWER', 'REPCOHOME', 'RECLTD', 'SHREECEM', 'SRTRANSFIN', 'SIEMENS', 'SREINFRA', 'SRF', 'SBIN', 'SAIL', 'STAR', 'SUNPHARMA', 'SUNTV', 'SUZLON', 'SYNDIBANK', 'TATACHEM', 'TATACOMM', 'TCS',
'TATAELXSI', 'TATAGLOBAL', 'TATAMTRDVR', 'TATAMOTORS', 'TATAPOWER', 'TATASTEEL', 'TECHM', 'INDIACEM', 'RAMCOCEM', 'SOUTHBANK', 'TITAN', 'TORNTPHARM', 'TORNTPOWER', 'TV18BRDCST', 'TVSMOTOR', 'UJJIVAN', 'ULTRACEMCO', 'UNIONBANK', 'UBL', 'UPL',
'VEDL', 'VGUARD', 'VOLTAS', 'WIPRO', 'WOCKPHARMA', 'YESBANK', 'ZEEL']
For all these companies the dataframe 'f' should be extracted in parallel in order to save run time. Can anyone help me solve this?
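Since the downloads are I/O-bound, one common approach is concurrent.futures.ThreadPoolExecutor, which overlaps the waiting. A sketch with a stand-in fetch function - the real body would be web.DataReader(name + '.NS', 'yahoo', start_date, end_date), which needs network access:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(name):
    # Stand-in for web.DataReader(name + '.NS', 'yahoo', start_date, end_date);
    # replace this body with the real call when running against the API.
    return 'dataframe for ' + name

Company_Names = ['ACC', 'ADANIENT', 'ADANIPORTS']  # shortened for the sketch

# pool.map preserves the order of Company_Names in the results.
with ThreadPoolExecutor(max_workers=10) as pool:
    frames = list(pool.map(fetch, Company_Names))

print(frames)
```

Threads are appropriate here because each worker spends most of its time waiting on the network; for CPU-bound work, ProcessPoolExecutor would be the analogous choice.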

How to fix KeyError 'groups' when using the Foursquare API in Python?

I am trying to list nearby venues using getNearbyVenues, which was defined earlier. Every line worked fine, and then I could no longer label nearby venues properly using Foursquare, although it had been working fine (I had to reset my ID and Secret as it just stopped working). I'm using Python 3.5 in a Jupyter Notebook.
What am I doing wrong? Thank you!!
BT_venues=getNearbyVenues(names=BT_df['Sector'],
latitudes=BT_df['Latitude'],
longitudes=BT_df['Longitude']
)
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-99-563e09cdcab5> in <module>()
      1 BT_venues=getNearbyVenues(names=BT_df['Sector'],
      2                           latitudes=BT_df['Latitude'],
----> 3                           longitudes=BT_df['Longitude']
      4                           )

<ipython-input-93-cfc09962ae0b> in getNearbyVenues(names, latitudes, longitudes, radius)
     18
     19     # make the GET request
---> 20     results = requests.get(url).json()['response']['groups'][0]['items']
     21
     22     # return only relevant information for each nearby venue

KeyError: 'groups'
As for groups, this was the code:
venues = res['response']['groups'][0]['items']
nearby_venues = json_normalize(venues)  # flatten JSON

# columns only
filtered_columns = ['venue.name', 'venue.categories',
                    'venue.location.lat', 'venue.location.lng']
nearby_venues = nearby_venues.loc[:, filtered_columns]

# only one category per row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type,
                                                        axis=1)

# columns cleaning up
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
Check response['meta'] - you may have exceeded your quota.
If you need an instant resolution, create a new Foursquare account, then create a new application and use the new client id and secret to call the API.
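The meta check can be sketched with sample payloads. The real res comes from requests.get(url).json(); the exact shape of the error response is an assumption here, so inspect res['meta'] yourself to confirm:

```python
# Sample responses shaped like Foursquare API payloads (an assumption -
# the real `res` comes from requests.get(url).json()).
ok_res = {'meta': {'code': 200},
          'response': {'groups': [{'items': [{'venue': {'name': 'Cafe'}}]}]}}
err_res = {'meta': {'code': 429}, 'response': {}}

def get_items(res):
    # Only index into 'groups' when the request actually succeeded;
    # on quota or auth errors the 'response' dict is empty.
    if res.get('meta', {}).get('code') != 200:
        return []  # or raise, after logging res['meta']
    return res['response']['groups'][0]['items']

print(get_items(ok_res))   # [{'venue': {'name': 'Cafe'}}]
print(get_items(err_res))  # []
```

Guarding on meta turns the opaque KeyError into an explicit, debuggable failure path.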
