When I try to set the existing exchanges (inputs) of an activity to zero and additionally add a new exchange, the following errors are returned:
"MultipleResults("Multiple production exchanges found")"
"NoResults: No suitable production exchanges founds"
First, I set all the input amounts to zero, except for the output:
for item in ds['exchanges']:
    item['amount'] = 0
ds['exchanges'][0]['amount'] = 1
Second, I add a new exchange:
ds['exchanges'].append({
    'amount': 1,
    'input': (new['database'], new['code']),
    'type': 'technosphere',
    'name': new['name'],
    'location': new['location']
})
Writing the database in the last step returns the errors quoted above:
w.write_brightway2_database(DB, NEW_DB_NAME)
Does anyone see where the problem could be, or whether there are alternative ways to replace multiple inputs with another one?
Thanks a lot for any hints!
Lukas
Full error traceback:
--------------------------------------------------------------------------
NoResults Traceback (most recent call last)
<ipython-input-6-d4f2dde2b33d> in <module>
2
3 NEW_DB_NAME = "ecoinvent_copy_new"
----> 4 w.write_brightway2_database(ecoinvent, NEW_DB_NAME)
5
6 # Check for new databases
~\Miniconda3\envs\ab\lib\site-packages\wurst\brightway\write_database.py in write_brightway2_database(data, name)
47
48 change_db_name(data, name)
---> 49 link_internal(data)
50 check_internal_linking(data)
51 check_duplicate_codes(data)
~\Miniconda3\envs\ab\lib\site-packages\wurst\linking.py in link_internal(data, fields)
11 input_databases = get_input_databases(data)
12 get_tuple = lambda exc: tuple([exc[f] for f in fields])
---> 13 products = {
14 get_tuple(reference_product(ds)): (ds['database'], ds['code'])
15 for ds in data
~\Miniconda3\envs\ab\lib\site-packages\wurst\linking.py in <dictcomp>(.0)
12 get_tuple = lambda exc: tuple([exc[f] for f in fields])
13 products = {
---> 14 get_tuple(reference_product(ds)): (ds['database'], ds['code'])
15 for ds in data
16 }
~\Miniconda3\envs\ab\lib\site-packages\wurst\searching.py in reference_product(ds)
82 and exc['type'] == 'production']
83 if not excs:
---> 84 raise NoResults("No suitable production exchanges founds")
85 elif len(excs) > 1:
86 raise MultipleResults("Multiple production exchanges found")
NoResults: No suitable production exchanges found
It seems that setting the exchanges to zero caused the problem; the database cannot be written in that case. What I did instead was set the exchanges to a very small number, so that they have no effect on the impact assessment but are not zero. Not the most elegant way, but it works for me. So if anyone has a similar problem, that might be a quick solution.
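The traceback suggests why: wurst's reference_product evidently only keeps production exchanges with a non-zero amount, so a production exchange set to 0 simply disappears and NoResults follows. A minimal sketch of the workaround, assuming a tiny epsilon is negligible for the impact assessment; filtering on the exchange type also avoids relying on the production exchange being at index 0:
EPS = 1e-12  # assumed small enough to have no effect on results

for exc in ds['exchanges']:
    if exc['type'] != 'production':
        exc['amount'] = EPS  # non-zero, so wurst's production-exchange check still passes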
Working with the Search Console API, I made it through the basics. Now I'm stuck on splitting and arranging the data: when trying to split, I'm getting NaN, and nothing I try works.
46 ((174.0, 3753.0, 0.04636290967226219, 7.816147...
47 ((93.0, 2155.0, 0.0431554524361949, 6.59025522...
48 ((176.0, 4657.0, 0.037792570324243074, 6.90251...
49 ((20.0, 1102.0, 0.018148820326678767, 7.435571...
50 ((31.0, 1133.0, 0.02736098852603707, 8.0935569...
Name: test, dtype: object
When trying to manipulate the data like this (and similar interactions):
data = source['test'].tolist()
data
It's clear that the data is not really available:
[<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>,
<searchconsole.query.Report(rows=1)>]
Does anyone have an idea how I can interact with my data? Thanks.
For reference, this is the code and the library I work with:
account = searchconsole.authenticate(client_config='client_secrets.json', credentials='credentials.json')
webproperty = account['https://www.example.com/']

def APIsc(date, keyword):
    results = webproperty.query.range(date, days=-30).filter('query', keyword, 'contains').get()
    return results

source['test'] = source.apply(lambda x: APIsc(x.date, x.keyword), axis=1)
source
made by: https://github.com/joshcarty/google-searchconsole
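The cells hold searchconsole.query.Report objects rather than plain values, which is why tolist() just returns the objects themselves. A sketch of unpacking them, assuming each Report exposes a .rows list of row tuples as in the library's README (the column names below are illustrative, not taken from the question):
import pandas as pd

# Sketch: each cell appears to be a Report with one row; pull the row tuple
# out and spread it into ordinary DataFrame columns.
rows = source['test'].apply(lambda report: report.rows[0])
source[['clicks', 'impressions', 'ctr', 'position']] = pd.DataFrame(rows.tolist(), index=source.index)
The library's Report also advertises a to_dataframe() helper, which may be simpler if each query returns many rows.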
I am trying to iterate over this list with words such as:
CTCCTC TCCTCT CCTCTC CTCTCC TCTCCC CTCCCA TCCCAA CCCAAA CCAAAC CAAACT
CTGGGC TGGGCC GGGCCA GGCCAA GCCAAT CCAATG CAATGC AATGCC ATGCCT TGCCTG GCCTGC
TGCCAG GCCAGG CCAGGA CAGGAG AGGAGG GGAGGG GAGGGG AGGGGC GGGGCT GGGCTG GGCTGG GCTGGT CTGGTC
TGGTCT GGTCTG GTCTGG TCTGGA CTGGAC TGGACA GGACAC GACACT ACACTA CACTAT
ATTCAG TTCAGC TCAGCC CAGCCA AGCCAG GCCAGT CCAGTC CAGTCA AGTCAA GTCAAC TCAACA CAACAC AACACA
ACACAA CACAAG ACAAGG AGGTGG GGTGGC GTGGCC TGGCCT GGCCTG GCCTGC CCTGCA CTGCAC
TGCACT GCACTC CACTCG ACTCGA CTCGAG TCGAGG CGAGGT GAGGTT AGGTTC GGTTCC
TATATA ATATAC TATACC ATACCT TACCTG ACCTGG CCTGGT CTGGTA TGGTAA GGTAAT GTAATG TAATGG AATGGA
I am using a for loop to read each item in the list and pass it through mk_model.vector. The code used is as follows:
for x in all_seq_sentences[:]:
    mk_model.vector(x)
    print(x)
Usually, mk_model.vector("AGT") will give an array corresponding to the defined dna2vec model, but here, rather than actually performing the model run, it throws this error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-144-77c47b13e98a> in <module>
1 for x in all_seq_sentences[:]:
----> 2 mk_model.vector(x)
3 print(x)
4
~/Desktop/DNA2vec/dna2vec/dna2vec/multi_k_model.py in vector(self, vocab)
35
36 def vector(self, vocab):
---> 37 return self.data[len(vocab)].model[vocab]
38
39 def unitvec(self, vec):
KeyError: 664
Looking forward to some help here.
The above problem occurred because the for loop took all of the items in each line as one item; that is what the KeyError: 664 indicates, since mk_model.vector looks up a sub-model by len(vocab) and the whole un-split string has length 664. .split() was therefore the best solution. For reference, see https://python-reference.readthedocs.io/en/latest/docs/str/split.html
Working code:
for i in all_seq_sentences:
    word = i.split()
    print(word[0])
Then implement another loop to access the model.vector function:
vec_of_all_seq = []
for sentence in all_seq_sentences:
    sentence = sentence.split()
    for word in sentence:
        vec_of_all_seq.append(mk_model.vector(word))
The vector representations derived from model.vector are collected in the list vec_of_all_seq, which can then be converted to a NumPy array.
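A short sketch of that conversion, assuming every k-mer embedding shares the same dna2vec dimension:
import numpy as np

# Stacks the per-k-mer vectors into a 2-D array of shape (num_kmers, embedding_dim)
vec_array = np.array(vec_of_all_seq)
print(vec_array.shape)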
I am trying to read this Pajek file in Google Colab's version of Jupyter, and I get an error when executing the following very simple code:
J = nx.MultiDiGraph()
J = nx.read_pajek("/content/data/graphdatasets/jazz.net")
print(nx.info(J))
The error is the following:
/usr/local/lib/python3.6/dist-packages/networkx/readwrite/pajek.py in parse_pajek(lines)
211 except AttributeError:
212 splitline = shlex.split(str(l))
--> 213 id, label = splitline[0:2]
214 labels.append(label)
215 G.add_node(label)
ValueError: not enough values to unpack (expected 2, got 1)
With pip show networkx, I see that I'm running Networkx version: 2.3. Am I doing something wrong in the code?
Update: Pasting below the file's first few lines:
*Vertices 198
*Arcs
*Edges
1 8 1
1 24 1
1 35 1
1 42 1
1 46 1
1 60 1
1 74 1
1 78 1
According to the Pajek definition, the first two lines of your file do not follow the standard. After *Vertices n, n lines with details about the vertices are expected. In addition, having both *Arcs and *Edges is a duplication: for an edge list that starts with *Arcs, NetworkX builds a MultiDiGraph, and for one that starts with *Edges, a MultiGraph (see the current code). To resolve your problem, you only need to delete the first two lines of your .net file.
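A sketch of the same fix done in code rather than by editing the file, assuming nx.parse_pajek accepts an iterable of lines (as it does in NetworkX 2.3):
import networkx as nx

with open("/content/data/graphdatasets/jazz.net") as f:
    lines = f.read().splitlines()

# Drop the non-conforming '*Vertices 198' header (no vertex lines follow it)
# and the empty '*Arcs' section; the '*Edges' list carries the actual data.
cleaned = [l for l in lines if l.strip().lower() not in ('*vertices 198', '*arcs')]

J = nx.parse_pajek(cleaned)
print(nx.info(J))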
I've been trying to adapt Andrew Gaidus's shapefile reading routine for my needs. The Jupyter Notebook I'm using acts like it has partitioned the disk of my MacBook Pro, so I can't read or write to disk. Gaidus has a good procedure for avoiding disk use, but it is written for a prior version of Python.
Here is the code:
dls = "https://github.com/ItsMeLarry/Coursera_Capstone/raw/master/tl_2010_25009_tract00%202.zip"
lynntracts = ZipFile(io.BytesIO(urllib.request.urlopen(dls).read()))
print("Done")
filenames = [y for y in sorted(lynntracts.namelist()) for ending in ['dbf', 'prj', 'shp', 'shx'] if y.endswith(ending)]
#For some reason, I get 8, instead of 4, filenames. The first 4 start with __MACOSX. I get rid of those. The problem I
#have with the 'TypeError' occurs no matter which set of 4 files I use.
print(filenames[0], 'Example of the 4 files that I remove in the for loop')
for i in range(0, 4):
    del filenames[0]
print(filenames)
dbf, prj, shp, shx = [io.StringIO(ZipFile.read(filename)) for filename in filenames]
r = shapefile.Reader(shp=shp, shx=shx, dbf=dbf)
print(r.numRecords)
Opening with io.BytesIO cured the prior problem of the bytes/str collision. Now I see the TypeError for ZipFile.read. I get the same error if I use io.BytesIO when calling it. Here is the error output, followed by the error info:
Done
__MACOSX/tl_2010_25009_tract00/._tl_2010_25009_tract00.dbf Example of the 4 files that I remove in the for loop
['tl_2010_25009_tract00/tl_2010_25009_tract00.dbf', 'tl_2010_25009_tract00/tl_2010_25009_tract00.prj', 'tl_2010_25009_tract00/tl_2010_25009_tract00.shp', 'tl_2010_25009_tract00/tl_2010_25009_tract00.shx']
TypeError Traceback (most recent call last)
in ()
12 del filenames[0]
13 print(filenames)
---> 14 dbf, prj, shp, shx = [io.StringIO(ZipFile.read(filename)) for filename in filenames]
15 r = shapefile.Reader(shp=shp, shx=shx, dbf=dbf)
16 print(r.numRecords)
in (.0)
12 del filenames[0]
13 print(filenames)
---> 14 dbf, prj, shp, shx = [io.StringIO(ZipFile.read(filename)) for filename in filenames]
15 r = shapefile.Reader(shp=shp, shx=shx, dbf=dbf)
16 print(r.numRecords)
TypeError: read() missing 1 required positional argument: 'name'
Clearly, I am a beginner. I've come up empty-handed trying to research this. Where do I go? What do I need to understand here? Thanks!
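One reading of the traceback, offered as a sketch rather than a definitive fix: ZipFile.read is being called on the ZipFile class itself instead of on the open archive lynntracts, which is why Python complains about the missing name argument. And since .shp/.shx/.dbf are binary formats, io.BytesIO is the fitting wrapper, not io.StringIO:
# Sketch: read each member from the instance (lynntracts), not the ZipFile
# class, and wrap the raw bytes in BytesIO because shapefiles are binary.
dbf, prj, shp, shx = [io.BytesIO(lynntracts.read(filename)) for filename in filenames]
r = shapefile.Reader(shp=shp, shx=shx, dbf=dbf)
print(r.numRecords)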
I am trying to list nearby venues using the previously defined getNearbyVenues function. Every line worked fine, and then I could no longer label nearby venues properly using Foursquare, although it had been working fine (I had to reset my ID and secret, as it just stopped working). I'm using Python 3.5 in a Jupyter Notebook.
What am I doing wrong? Thank you!!
BT_venues=getNearbyVenues(names=BT_df['Sector'],
latitudes=BT_df['Latitude'],
longitudes=BT_df['Longitude']
)
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-99-563e09cdcab5> in <module>()
      1 BT_venues=getNearbyVenues(names=BT_df['Sector'],
      2                 latitudes=BT_df['Latitude'],
----> 3                 longitudes=BT_df['Longitude']
      4                 )

<ipython-input-93-cfc09962ae0b> in getNearbyVenues(names, latitudes, longitudes, radius)
     18
     19     # make the GET request
---> 20     results = requests.get(url).json()['response']['groups'][0]['items']
     21
     22     # return only relevant information for each nearby venue

KeyError: 'groups'
As for groups, this was the code:
venues = res['response']['groups'][0]['items']
nearby_venues = json_normalize(venues)  # flatten JSON

# columns only
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues = nearby_venues.loc[:, filtered_columns]

# only one category per row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)

# columns cleaning up
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]

nearby_venues.head()
Check response['meta']; you may have exceeded your quota.
If you need an instant resolution, create a new Foursquare account, then create a new application and use your new client ID and secret to call the API.
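A sketch of that check, assuming the Foursquare v2 response shape (a meta block with a code and an errorType) described in their documentation:
# Sketch: inspect 'meta' before reaching into 'response' -> 'groups'.
res = requests.get(url).json()
if res['meta']['code'] != 200:
    print(res['meta'])  # e.g. a 429 code with errorType 'quota_exceeded'
else:
    results = res['response']['groups'][0]['items']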