When I try to set the existing exchanges (inputs) of an activity to zero and additionally add a new exchange, the following errors are returned:
"MultipleResults("Multiple production exchanges found")"
"NoResults: No suitable production exchanges founds"
Firstly, I set all the exchange amounts to zero except for the output:
for idx, item in enumerate(ds['exchanges']):
    item['amount'] = 0
ds['exchanges'][0]['amount'] = 1
Secondly, I add a new exchange:
ds['exchanges'].append({
    'amount': 1,
    'input': (new['database'], new['code']),
    'type': 'technosphere',
    'name': new['name'],
    'location': new['location']
})
Writing the database in the last step returns the errors:
w.write_brightway2_database(DB, NEW_DB_NAME)
Does anyone see where the problem could be, or whether there are alternative ways to replace multiple inputs with another one?
Thanks a lot for any hints!
Lukas
Full error traceback:
--------------------------------------------------------------------------
NoResults Traceback (most recent call last)
<ipython-input-6-d4f2dde2b33d> in <module>
2
3 NEW_DB_NAME = "ecoinvent_copy_new"
----> 4 w.write_brightway2_database(ecoinvent, NEW_DB_NAME)
5
6 # Check for new databases
~\Miniconda3\envs\ab\lib\site-packages\wurst\brightway\write_database.py in write_brightway2_database(data, name)
47
48 change_db_name(data, name)
---> 49 link_internal(data)
50 check_internal_linking(data)
51 check_duplicate_codes(data)
~\Miniconda3\envs\ab\lib\site-packages\wurst\linking.py in link_internal(data, fields)
11 input_databases = get_input_databases(data)
12 get_tuple = lambda exc: tuple([exc[f] for f in fields])
---> 13 products = {
14 get_tuple(reference_product(ds)): (ds['database'], ds['code'])
15 for ds in data
~\Miniconda3\envs\ab\lib\site-packages\wurst\linking.py in <dictcomp>(.0)
12 get_tuple = lambda exc: tuple([exc[f] for f in fields])
13 products = {
---> 14 get_tuple(reference_product(ds)): (ds['database'], ds['code'])
15 for ds in data
16 }
~\Miniconda3\envs\ab\lib\site-packages\wurst\searching.py in reference_product(ds)
82 and exc['type'] == 'production']
83 if not excs:
---> 84 raise NoResults("No suitable production exchanges founds")
85 elif len(excs) > 1:
86 raise MultipleResults("Multiple production exchanges found")
NoResults: No suitable production exchanges found
It seems that setting the exchanges to zero caused the problem; the database cannot be written in this case. What I did instead was to set the exchange amounts to a very small number, so that they have no effect on the impact assessment but are not zero. Not the most elegant way, but it works for me. So if anyone has similar problems, that might be a quick solution.
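A possible alternative, as a minimal sketch: instead of zeroing everything and then restoring index 0 (which assumes the production exchange is always the first entry), zero only the non-production exchanges so the reference product keeps its amount. This assumes, as the traceback suggests, that wurst's reference_product() only accepts production exchanges with a non-zero amount:
# Zero only the inputs; leave the production (reference product) exchange untouched
for exc in ds['exchanges']:
    if exc['type'] != 'production':
        exc['amount'] = 0

# Add the replacement input as before
ds['exchanges'].append({
    'amount': 1,
    'input': (new['database'], new['code']),
    'type': 'technosphere',
    'name': new['name'],
    'location': new['location']
})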
My code
import requests
groups = requests.get(f'https://groups.roblox.com/v2/users/1283171278/groups/roles')
groupsIds = groups.json()['id']
Error I get when I run it:
Traceback (most recent call last):
File "c:/Users/badam/Documents/Request/Testing/Group Checker.py", line 16, in <module>
groupsIds = groups.json()['id']
KeyError: 'id'
The list is pretty big and has around 100 entries. Is there any way I can grab all of the ids and make them into a list?
Here is the API I'm looking at: https://groups.roblox.com/v2/users/1283171278/groups/roles
There's a data key in the result, so iterate over it and get the id from the group and role keys:
import json
import requests
groups = requests.get(f'https://groups.roblox.com/v2/users/1283171278/groups/roles')
groupsIds = groups.json()
# uncomment this to print all data:
# print(json.dumps(groupsIds, indent=4))
for d in groupsIds['data']:
print(d['group']['id'])
print(d['role']['id'])
print('-' * 80)
Prints:
...
--------------------------------------------------------------------------------
3904940
26571394
--------------------------------------------------------------------------------
3933808
26746825
--------------------------------------------------------------------------------
3801996
25946726
--------------------------------------------------------------------------------
...
EDIT: To store all IDs in a list, you can do:
import json
import requests
groups = requests.get(f'https://groups.roblox.com/v2/users/1283171278/groups/roles')
groupsIds = groups.json()
# uncomment this to print all data:
# print(json.dumps(groupsIds, indent=4))
all_ids = []
for d in groupsIds['data']:
all_ids.append(d['group']['id'])
all_ids.append(d['role']['id'])
print(all_ids)
Prints:
[4998084, 33338613, 4484383, 30158860, 4808983, 32158531, 3912486, 26617376, 4387638, 29571745, 3686254, 25244399, 4475916, 30106939, 4641294, 31134295, 713898, 4280806, 4093279, 27731862, 998593, 6351840, 913147, 5698957, 3516878, 24177537, 659270, 3905638, 4427759, 29811565, 587575, 3437383, 4332819, 29229751, 3478315, 23936152, 2811716, 19021100, 472523, 2752478, 4734036, 31698614, 3338157, 23028518, 1014052, 6471675, 4609359, 30934766, 3939008, 26778648, 4817725, 32211571, 601160, 3519151, 4946683, 33009776, 1208148, 7978533, 4449651, 29945000, 4634345, 31089984, 5026319, 33518257, 2629826, 17481464, 821916, 5031014, 4926232, 32881634, 4897605, 32702661, 3740736, 25575394, 448496, 2610176, 5036596, 33583400, 4876498, 32570198, 3165187, 21791248, 4792137, 32055453, 856700, 5279506, 4734914, 31704012, 1098712, 7127871, 4499514, 30252966, 3846707, 26215627, 1175898, 7725693, 4503430, 30276985, 3344443, 23071179, 2517476, 16546164, 4694162, 31461944, 780837, 4744647, 2616253, 17367584, 2908472, 19824345, 2557006, 16869679, 2536583, 16700979, 535476, 3126430, 3928108, 26712025, 641347, 3781071, 2931845, 20016645, 1233647, 8182827, 2745521, 18459791, 803463, 4902387, 490144, 2856446, 488760, 2848189, 3574179, 24536348, 4056581, 27505272, 1007736, 6423078, 4500976, 30261867, 898461, 5588947, 4161433, 28157771, 4053816, 27488481, 4774722, 31947028, 3091411, 21243745, 3640836, 24959291, 3576224, 24548933, 3770621, 25756964, 4142662, 28039034, 538165, 3142587, 539598, 3150924, 602671, 3528176, 537016, 3135630, 3175117, 21863370, 4633984, 31087818, 3904940, 26571394, 3933808, 26746825, 3801996, 25946726, 3818582, 26046898, 4056358, 27503773, 823297, 5040824, 4226680, 28562065, 4047138, 27446480, 4200090, 28397805, 821483, 5027958, 624104, 3658165, 3576317, 24549546, 3257526, 22460774]
Try this code; it returns the first group's id:
print(groups.json()['data'][0]['group']['id'])
For iteration, try this:
groupID = groups.json()['data']
length_of_data = len(groups.json()['data'])
for i in range(length_of_data):
    print(groupID[i]['group']['id'])
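If you just want the ids in one flat structure, a compact variant of the same idea (using the same endpoint and the same data/group/role keys shown above) is a list comprehension:
import requests

groups = requests.get('https://groups.roblox.com/v2/users/1283171278/groups/roles')
data = groups.json()['data']

# One (group_id, role_id) pair per group membership
pairs = [(d['group']['id'], d['role']['id']) for d in data]
print(pairs)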
Working with the Search Console API, I made it through the basics.
Now I'm stuck on splitting and arranging the data: when trying to split, I'm getting NaN, and nothing I try works.
46 ((174.0, 3753.0, 0.04636290967226219, 7.816147...
47 ((93.0, 2155.0, 0.0431554524361949, 6.59025522...
48 ((176.0, 4657.0, 0.037792570324243074, 6.90251...
49 ((20.0, 1102.0, 0.018148820326678767, 7.435571...
50 ((31.0, 1133.0, 0.02736098852603707, 8.0935569...
Name: test, dtype: object
When trying to manipulate the data like this (and similar interactions):
data=source['test'].tolist()
data
It's clear that the data is not really available...
[<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>]
Does anyone have an idea how I can interact with my data?
Thanks.
For reference, this is the code and the library I work with:
account = searchconsole.authenticate(client_config='client_secrets.json', credentials='credentials.json')
webproperty = account['https://www.example.com/']
def APIsc(date, keyword):
    results = webproperty.query.range(date, days=-30).filter('query', keyword, 'contains').get()
    return results

source['test'] = source.apply(lambda x: APIsc(x.date, x.keyword), axis=1)
source
made by: https://github.com/joshcarty/google-searchconsole
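Each cell of source['test'] holds a whole searchconsole Report object rather than plain values, which is why pandas shows them as <searchconsole.query.Report(rows=1)> and splitting yields NaN. A minimal sketch of one way to unpack them, assuming the Report objects expose .to_dataframe() as described in the joshcarty/google-searchconsole README (the frames/combined names below are just for illustration):
import pandas as pd

# Expand every Report stored in source['test'] into rows,
# keeping the original date and keyword alongside the query metrics
frames = []
for _, row in source.iterrows():
    df = row['test'].to_dataframe()  # assumes Report.to_dataframe() is available
    df['date'] = row['date']
    df['keyword'] = row['keyword']
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
print(combined.head())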