How to fix KeyError: 'groups' when using the Foursquare API in Python? - python-3.x

I am trying to list nearby venues using getNearbyVenues, which was defined earlier. Every line worked fine, and then I could no longer label the nearby venues properly using Foursquare, although it had been working fine (I had to reset my ID and Secret because it just stopped working). I'm using Python 3.5 in a Jupyter Notebook.
What am I doing wrong? Thank you!!
BT_venues = getNearbyVenues(names=BT_df['Sector'],
                            latitudes=BT_df['Latitude'],
                            longitudes=BT_df['Longitude']
                            )
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-99-563e09cdcab5> in <module>()
      1 BT_venues=getNearbyVenues(names=BT_df['Sector'],
      2                           latitudes=BT_df['Latitude'],
----> 3                           longitudes=BT_df['Longitude']
      4                           )

<ipython-input-93-cfc09962ae0b> in getNearbyVenues(names, latitudes, longitudes, radius)
     18
     19         # make the GET request
---> 20         results = requests.get(url).json()['response']['groups'][0]['items']
     21
     22         # return only relevant information for each nearby venue

KeyError: 'groups'
As for groups, this was the code:
venues = res['response']['groups'][0]['items']
nearby_venues = json_normalize(venues)  # flatten JSON

# keep only the relevant columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues = nearby_venues.loc[:, filtered_columns]

# keep only one category per row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)

# clean up the column names
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]

nearby_venues.head()

Check response['meta']; you may have exceeded your quota.
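A minimal sketch of that check, assuming url is the request URL built inside getNearbyVenues (the Foursquare v2 API reports failures in the meta block of the JSON body rather than only via the HTTP status):
resp = requests.get(url).json()

meta = resp.get('meta', {})
if meta.get('code') != 200:
    # e.g. quota exceeded or invalid auth; no 'groups' key exists in this case
    print('Foursquare error:', meta.get('code'), meta.get('errorType'), meta.get('errorDetail'))
else:
    results = resp['response']['groups'][0]['items']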

If you need an instant resolution, create a new Foursquare account. Then create a new application and use your new client ID and secret to call the API.

Related

Why do I keep receiving a "requests.exceptions.InvalidSchema: No connection adapters were found for '0'" error?

I'm trying to create a script that returns domain and backlink numbers from the SEMrush API for each URL held in a dataframe.
The dataframe containing the URLs looks like this:
                                                   0
0  www.ig.com/jp/trading-strategies/swing-trading...
1  www.ig.com/it/news-e-idee-di-trading/criptoval...
2  www.ig.com/uk/news-and-trade-ideas/the-omicron...

[1468 rows x 1 columns]
When I run my script I get the following error:
requests.exceptions.InvalidSchema: No connection adapters were found for '0 https://api.semrush.com/analytics/v1/?key=1f0e...\nName: 0, dtype: object'
Here is the part of the code that generates the error:
for index, url in gsdf.iterrows():
    rr = requests.request("GET", "https://api.semrush.com/analytics/v1/?key=" + API_KEY + "&type=backlinks_tld&target=" + url + "&target_type=url&export_columns=domains_num,backlinks_num&display_limit=1", headers=headers, data=payload)
    data = json.loads(rr.text.encode('utf8'))
    srdf = srdf.append({domains_num: data, backlinks_num: data}, ignore_index=True)
I'm not sure why this happens, as I'm new to Python. Can you help?
Kind thanks,
Mark
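The error message itself shows the cause: the "URL" starts with '0 https://...' and ends with 'Name: 0, dtype: object', i.e. it is a whole pandas row (a Series), not a string. iterrows() yields (index, row) pairs, so url above is a row. A minimal sketch of the likely fix, assuming the URLs live in column 0 of gsdf:
for index, row in gsdf.iterrows():
    url = row[0]  # pull the URL string out of the row Series
    rr = requests.request("GET", "https://api.semrush.com/analytics/v1/?key=" + API_KEY + "&type=backlinks_tld&target=" + url + "&target_type=url&export_columns=domains_num,backlinks_num&display_limit=1", headers=headers, data=payload)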

Setting all inputs of an activity to 0 in wurst and brightway

When trying to set the existing exchanges (inputs) of an activity to zero and additionally adding a new exchange, the following errors are returned:
"MultipleResults("Multiple production exchanges found")"
"NoResults: No suitable production exchanges found"
First, I set all the input amounts to zero, except for the output:
for idx, item in enumerate(ds['exchanges']):
    item['amount'] = 0
ds['exchanges'][0]['amount'] = 1
Second, I add a new exchange:
ds['exchanges'].append({
    'amount': 1,
    'input': (new['database'], new['code']),
    'type': 'technosphere',
    'name': new['name'],
    'location': new['location']
})
Writing the database in the last step returns the errors:
w.write_brightway2_database(DB, NEW_DB_NAME)
Does anyone see where the problem could be, or know an alternative way to replace multiple inputs with another one?
Thanks a lot for any hints!
Lukas
Full error traceback:
---------------------------------------------------------------------------
NoResults                                 Traceback (most recent call last)
<ipython-input-6-d4f2dde2b33d> in <module>
      2
      3 NEW_DB_NAME = "ecoinvent_copy_new"
----> 4 w.write_brightway2_database(ecoinvent, NEW_DB_NAME)
      5
      6 # Check for new databases

~\Miniconda3\envs\ab\lib\site-packages\wurst\brightway\write_database.py in write_brightway2_database(data, name)
     47
     48     change_db_name(data, name)
---> 49     link_internal(data)
     50     check_internal_linking(data)
     51     check_duplicate_codes(data)

~\Miniconda3\envs\ab\lib\site-packages\wurst\linking.py in link_internal(data, fields)
     11     input_databases = get_input_databases(data)
     12     get_tuple = lambda exc: tuple([exc[f] for f in fields])
---> 13     products = {
     14         get_tuple(reference_product(ds)): (ds['database'], ds['code'])
     15         for ds in data

~\Miniconda3\envs\ab\lib\site-packages\wurst\linking.py in <dictcomp>(.0)
     12     get_tuple = lambda exc: tuple([exc[f] for f in fields])
     13     products = {
---> 14         get_tuple(reference_product(ds)): (ds['database'], ds['code'])
     15         for ds in data
     16     }

~\Miniconda3\envs\ab\lib\site-packages\wurst\searching.py in reference_product(ds)
     82                and exc['type'] == 'production']
     83     if not excs:
---> 84         raise NoResults("No suitable production exchanges founds")
     85     elif len(excs) > 1:
     86         raise MultipleResults("Multiple production exchanges found")

NoResults: No suitable production exchanges founds
It seems that setting the exchanges to zero caused the problem; the database cannot be written in that case. What I did instead was set the exchanges to a very small number, so that they have no effect on the impact assessment but are not zero. Not the most elegant way, but it works for me. So if anyone has a similar problem, that might be a quick solution.
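The traceback suggests why: reference_product() in wurst's searching.py keeps only exchanges of type 'production' with a nonzero amount, so zeroing every exchange can leave it with no candidate and raise NoResults. A minimal sketch of the small-number workaround, assuming ds from the question and a hypothetical EPSILON value:
EPSILON = 1e-12  # negligible in the impact assessment, but not zero

for idx, item in enumerate(ds['exchanges']):
    if item['type'] == 'production':
        continue  # leave the reference product untouched
    item['amount'] = EPSILON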

Python: how do I get all IDs from this API dictionary?

My code:
import requests
groups = requests.get(f'https://groups.roblox.com/v2/users/1283171278/groups/roles')
groupsIds = groups.json()['id']
The error I get when I run it:
Traceback (most recent call last):
  File "c:/Users/badam/Documents/Request/Testing/Group Checker.py", line 16, in <module>
    groupsIds = groups.json()['id']
KeyError: 'id'
The list is pretty big and has about 100 IDs; is there any way I can grab all of them and put them into a list?
Here is the API I'm looking at: https://groups.roblox.com/v2/users/1283171278/groups/roles

There's a data key in the result, so iterate over it and get id from the role and group keys:
import json
import requests

groups = requests.get(f'https://groups.roblox.com/v2/users/1283171278/groups/roles')
groupsIds = groups.json()

# uncomment this to print all data:
# print(json.dumps(groupsIds, indent=4))

for d in groupsIds['data']:
    print(d['group']['id'])
    print(d['role']['id'])
    print('-' * 80)
Prints:
...
--------------------------------------------------------------------------------
3904940
26571394
--------------------------------------------------------------------------------
3933808
26746825
--------------------------------------------------------------------------------
3801996
25946726
--------------------------------------------------------------------------------
...
EDIT: To store all IDs in a list, you can do:
import json
import requests

groups = requests.get(f'https://groups.roblox.com/v2/users/1283171278/groups/roles')
groupsIds = groups.json()

# uncomment this to print all data:
# print(json.dumps(groupsIds, indent=4))

all_ids = []
for d in groupsIds['data']:
    all_ids.append(d['group']['id'])
    all_ids.append(d['role']['id'])

print(all_ids)
Prints:
[4998084, 33338613, 4484383, 30158860, 4808983, 32158531, 3912486, 26617376, 4387638, 29571745, 3686254, 25244399, 4475916, 30106939, 4641294, 31134295, 713898, 4280806, 4093279, 27731862, 998593, 6351840, 913147, 5698957, 3516878, 24177537, 659270, 3905638, 4427759, 29811565, 587575, 3437383, 4332819, 29229751, 3478315, 23936152, 2811716, 19021100, 472523, 2752478, 4734036, 31698614, 3338157, 23028518, 1014052, 6471675, 4609359, 30934766, 3939008, 26778648, 4817725, 32211571, 601160, 3519151, 4946683, 33009776, 1208148, 7978533, 4449651, 29945000, 4634345, 31089984, 5026319, 33518257, 2629826, 17481464, 821916, 5031014, 4926232, 32881634, 4897605, 32702661, 3740736, 25575394, 448496, 2610176, 5036596, 33583400, 4876498, 32570198, 3165187, 21791248, 4792137, 32055453, 856700, 5279506, 4734914, 31704012, 1098712, 7127871, 4499514, 30252966, 3846707, 26215627, 1175898, 7725693, 4503430, 30276985, 3344443, 23071179, 2517476, 16546164, 4694162, 31461944, 780837, 4744647, 2616253, 17367584, 2908472, 19824345, 2557006, 16869679, 2536583, 16700979, 535476, 3126430, 3928108, 26712025, 641347, 3781071, 2931845, 20016645, 1233647, 8182827, 2745521, 18459791, 803463, 4902387, 490144, 2856446, 488760, 2848189, 3574179, 24536348, 4056581, 27505272, 1007736, 6423078, 4500976, 30261867, 898461, 5588947, 4161433, 28157771, 4053816, 27488481, 4774722, 31947028, 3091411, 21243745, 3640836, 24959291, 3576224, 24548933, 3770621, 25756964, 4142662, 28039034, 538165, 3142587, 539598, 3150924, 602671, 3528176, 537016, 3135630, 3175117, 21863370, 4633984, 31087818, 3904940, 26571394, 3933808, 26746825, 3801996, 25946726, 3818582, 26046898, 4056358, 27503773, 823297, 5040824, 4226680, 28562065, 4047138, 27446480, 4200090, 28397805, 821483, 5027958, 624104, 3658165, 3576317, 24549546, 3257526, 22460774]
Try this code; it returns the first group's id:
print(groups.json()['data'][0]['group']['id'])
For iteration, try this:
groupID = groups.json()['data']
length_of_data = len(groups.json()['data'])

for i in range(length_of_data):
    print(groupID[i]['group']['id'])
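As a side note, a more compact sketch that collects both IDs with a list comprehension, equivalent to the loops above:
all_ids = [i for d in groups.json()['data'] for i in (d['group']['id'], d['role']['id'])]
print(all_ids)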

Why am I getting <searchconsole.query.Report(rows=1)> instead of numbers/strings?

I'm working with the Search Console API and made it through the basics.
Now I'm stuck on splitting and arranging the data: when trying to split, I get NaN; nothing I try works.
46 ((174.0, 3753.0, 0.04636290967226219, 7.816147...
47 ((93.0, 2155.0, 0.0431554524361949, 6.59025522...
48 ((176.0, 4657.0, 0.037792570324243074, 6.90251...
49 ((20.0, 1102.0, 0.018148820326678767, 7.435571...
50 ((31.0, 1133.0, 0.02736098852603707, 8.0935569...
Name: test, dtype: object
When trying to manipulate the data like this (and with similar interactions):
data = source['test'].tolist()
data
it's clear that the data is not really available:
[<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>]
Does anyone have an idea how I can interact with my data?
Thanks.
For reference, this is the code and the program I work with:
account = searchconsole.authenticate(client_config='client_secrets.json', credentials='credentials.json')
webproperty = account['https://www.example.com/']

def APIsc(date, keyword):
    results = webproperty.query.range(date, days=-30).filter('query', keyword, 'contains').get()
    return results

source['test'] = source.apply(lambda x: APIsc(x.date, x.keyword), axis=1)
source
made by: https://github.com/joshcarty/google-searchconsole
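Each cell in source['test'] holds a whole Report object, which is why .tolist() just returns their reprs. A minimal sketch of unpacking real values instead, assuming the library's Report exposes a .rows list of namedtuples (as the linked project's README shows) and that the first row is the one of interest:
def APIsc(date, keyword):
    report = webproperty.query.range(date, days=-30).filter('query', keyword, 'contains').get()
    # return plain values instead of the Report wrapper so pandas stores numbers/strings
    return tuple(report.rows[0]) if report.rows else None

source['test'] = source.apply(lambda x: APIsc(x.date, x.keyword), axis=1)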

for loop over list KeyError: 664

I am trying to iterate over this list of words:
CTCCTC TCCTCT CCTCTC CTCTCC TCTCCC CTCCCA TCCCAA CCCAAA CCAAAC CAAACT
CTGGGC TGGGCC GGGCCA GGCCAA GCCAAT CCAATG CAATGC AATGCC ATGCCT TGCCTG GCCTGC
TGCCAG GCCAGG CCAGGA CAGGAG AGGAGG GGAGGG GAGGGG AGGGGC GGGGCT GGGCTG GGCTGG GCTGGT CTGGTC
TGGTCT GGTCTG GTCTGG TCTGGA CTGGAC TGGACA GGACAC GACACT ACACTA CACTAT
ATTCAG TTCAGC TCAGCC CAGCCA AGCCAG GCCAGT CCAGTC CAGTCA AGTCAA GTCAAC TCAACA CAACAC AACACA
ACACAA CACAAG ACAAGG AGGTGG GGTGGC GTGGCC TGGCCT GGCCTG GCCTGC CCTGCA CTGCAC
TGCACT GCACTC CACTCG ACTCGA CTCGAG TCGAGG CGAGGT GAGGTT AGGTTC GGTTCC
TATATA ATATAC TATACC ATACCT TACCTG ACCTGG CCTGGT CTGGTA TGGTAA GGTAAT GTAATG TAATGG AATGGA
I am using a for loop to read each item in the list and pass it through mk_model.vector. The code used is as follows:
for x in all_seq_sentences[:]:
    mk_model.vector(x)
    print(x)
Usually, mk_model.vector("AGT") will give an array corresponding to the defined dna2vec model, but here, rather than actually performing the model run, it throws this error:
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-144-77c47b13e98a> in <module>
      1 for x in all_seq_sentences[:]:
----> 2     mk_model.vector(x)
      3     print(x)
      4

~/Desktop/DNA2vec/dna2vec/dna2vec/multi_k_model.py in vector(self, vocab)
     35
     36     def vector(self, vocab):
---> 37         return self.data[len(vocab)].model[vocab]
     38
     39     def unitvec(self, vec):

KeyError: 664
Looking forward to some help here.
The above problem occurred because the for loop took each whole line as a single item, so len(vocab) was the length of the entire line (664) rather than of one k-mer, and dna2vec has no model keyed to that length; .split() is the best solution for it. To read more, see https://python-reference.readthedocs.io/en/latest/docs/str/split.html
Working code:
for i in all_seq_sentences:
    word = i.split()
    print(word[0])
Then implement another loop to access the model.vector function:
vec_of_all_seq = []
for sentence in all_seq_sentences:
    sentence = sentence.split()
    for word in sentence:
        vec_of_all_seq.append(mk_model.vector(word))
The vector representations derived from model.vector will be saved in the list vec_of_all_seq (each element is a NumPy array).
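If a single NumPy array is wanted rather than a list, a minimal sketch (assuming every vector has the same length, which holds when the embedding dimension is fixed across k):
import numpy as np

vec_matrix = np.asarray(vec_of_all_seq)  # shape: (num_kmers, embedding_dim)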
