How to parameterise an existing exchange with Brightway - brightway

I want to parameterize exchanges of an existing Brightway activity. In the example I've found, the formula is defined for a new_exchange; can we do it for an existing one?
A practical example would be to redefine the fuel consumption as a function of higher heating value and efficiency.
ex = [act for act in bw.Database('ei_34con')
      if 'natural gas' in act['name']
      and 'condensing' in act['name']
      and 'CH' in act['location']][0].copy()
ng_flow = [f for f in ex.technosphere() if 'natural gas' in f['name']][0]
act_data = [{'name': 'eff',
             'database': ex['database'],
             'code': ex['code'],
             'amount': 0.95,
             'unit': ''},
            {'name': 'HHV',
             'database': ex['database'],
             'code': ex['code'],
             'amount': 37,
             'unit': 'MJ/m3'}]
bw.parameters.new_activity_parameters(act_data, "my group")
I naively tried:
ng_flow['formula'] = '1/eff/HHV'
bw.parameters.add_exchanges_to_group("my group", ex)
ActivityParameter.recalculate_exchanges("my group")
but the parameters did not update the amount of the exchange.

You were quite close.
I reran your code, and the line
bw.parameters.add_exchanges_to_group("my group", ex)
returns 0, which means no exchanges were added to the group.
However, if I save the exchange first:
ng_flow.save()
bw.parameters.add_exchanges_to_group("my group", ex)
returns 1, and
for exc in ex.technosphere():
    if "natural gas" in exc['name']:
        print(exc.amount, exc.input, exc.output)
prints
0.028449502133712657 'market for natural gas, low pressure' (cubic meter, CH, None) 'heat production, natural gas, at boiler condensing modulating <100kW' (megajoule, CH, None)
Note that ng_flow.as_dict() does not show the updated value.
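Putting the pieces together, a minimal sketch of the working sequence, using the same names as above and assuming ActivityParameter has been imported from bw2data.parameters:
bw.parameters.new_activity_parameters(act_data, "my group")   # as in the question
ng_flow['formula'] = '1/eff/HHV'
ng_flow.save()                                                # persist the formula before grouping
bw.parameters.add_exchanges_to_group("my group", ex)          # now returns 1
ActivityParameter.recalculate_exchanges("my group")

for exc in ex.technosphere():                                 # re-read the exchanges to see the new amount
    if 'natural gas' in exc['name']:
        print(exc.amount)                                     # 1 / (0.95 * 37) ≈ 0.02845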

Elif conditions are not working in my program

This is my code that takes a number of codons. Codons are groups of three nucleotides, each coding for an amino acid.
codon_sequence = []
print("Enter no. of codons you want")
n = int(input())
for i in range(n):
    codon = str(input())
    codon_sequence.append(codon)
print(codon_sequence)
for i in range(n):
    if(codon_sequence[i]=="UUU" or "UUC" or "TTT" or "TTC"):
        print("Phe_")
    elif(codon_sequence[i]=="UUA" or "UUG" or "CUU" or "CUC" or "CUG" or "CUA" or "TTA" or "TTG" or "CTT" or "CTC" or "CTG" or "CTA"):
        print("Leu_")
    elif(codon_sequence[i]=="UCU" or "UCC" or "UCG" or "UCA" or "AGU" or "AGC" or "TCT" or "TCC" or "TCG" or "TCA" or "AGT" or "AGC"):
        print("Ser_")
    elif(codon_sequence[i]=="UAU" or "UAC" or "TAT" or "TAC"):
        print("Tyr_")
    elif(codon_sequence[i]=="UGU" or "UGC" or "TGT" or "TGC"):
        print("Cys_")
    elif(codon_sequence[i]=="UGG" or "TGG"):
        print("Trp_")
    elif(codon_sequence[i]=="CCU" or "CCC" or "CCA" or "CCG" or "CCT"):
        print("Pro_")
    elif(codon_sequence[i]=="CGU" or "CGC" or "CGA" or "CGG" or "AGA" or "AGG" or "CGT"):
        print("Arg_")
    elif(codon_sequence[i]=="CAU" or "CAC" or "CAT"):
        print("His_")
    elif(codon_sequence[i]=="CAA" or "CAG"):
        print("Gln_")
    elif(codon_sequence[i]=="AUU" or "AUC" or "AUA" or "ATT" or "ATC" or "ATA"):
        print("Ile_")
    elif(codon_sequence[i]=="AUG"):
        print("Met_")
    elif(codon_sequence[i]=="ACU" or "ACC" or "ACA" or "ACG" or "ACT"):
        print("Thr_")
    elif(codon_sequence[i]=="GUU" or "GUC" or "GUA" or "GUG" or "GTT" or "GTC" or "GTA" or "GTG"):
        print("Val_")
    elif(codon_sequence[i]=="GCU" or "GCC" or "GCA" or "GCG" or "GCT"):
        print("Ala_")
    elif(codon_sequence[i]=="GGU" or "GGC" or "GGA" or "GGG" or "GGT"):
        print("Gly_")
    elif(codon_sequence[i]=="GAU" or "GAC" or "GAT"):
        print("Asp_")
    elif(codon_sequence[i]=="GAA" or "GAG"):
        print("Glu_")
    elif(codon_sequence[i]=="AAU" or "AAC" or "AAT"):
        print("Asn_")
    elif(codon_sequence[i]=="AAA" or "AAG"):
        print("Lys_")
    else:
        print("Stop_")
This, however, gives me only 'Phe_' as a result and ignores all the other conditions.
Reason why your code is not hitting the elif blocks
Your if and elif blocks should check whether codon_sequence[i] is equal to each string of interest:
if(codon_sequence[i]=="UUU" or codon_sequence[i]=="UUC" or codon_sequence[i]=="TTT" or codon_sequence[i]=="TTC"):
Instead, you have an or condition against plain strings like "UUC". A non-empty string is always truthy, so the first if condition is always True, and you therefore never reach the elif blocks.
Also, a better way of writing the if statement would be:
if codon_sequence[i] in ["UUU", "UUC", "TTT", "TTC"]:
    print("Phe_")
This would be a great candidate for a switch statement, but as the previous answer mentioned, you can't put an "or" between plain strings the way you're doing.
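Since the codon table is a fixed lookup, a small hypothetical sketch using a dictionary (only the first two amino acids are filled in here) avoids the long if/elif chain entirely:
# Hypothetical sketch: map each codon to its amino acid once, then look it up.
codon_table = {
    "UUU": "Phe", "UUC": "Phe", "TTT": "Phe", "TTC": "Phe",
    "UUA": "Leu", "UUG": "Leu", "CUU": "Leu", "CUC": "Leu",
    # ... fill in the remaining codons from the table above ...
}
for codon in codon_sequence:
    # Unknown codons fall back to "Stop", mirroring the else branch above.
    print(codon_table.get(codon, "Stop") + "_")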

AWARE method in ecoinvent through brightway2

I'm trying to extract all water inputs to several processes using Brightway or the Activity Browser (AB), but I'm having trouble getting those values from the inventory_matrix. I need the quantity of water consumed and the origin of the water (surface, well, unspecified, tap water) along with its geography. I'm using ecoinvent 3.7.1.
One way I thought I could do that is to create an impact assessment method with CFs of 1 for those compartments, then apply it to the processes and analyse the elementary flow contributions.
I'm not sure, though, that I can get the geography like this.
EDITED MOST OF THE QUESTION FOR CLARITY
It seems like I was trying to reinvent the wheel! My goal is to implement the AWARE method and apply it to quite a few processes. The best outcome would be to use AWARE through the Activity Browser so I can use all of its functionalities, which save me a lot of time.
I just saw that brightway2-regional and bw2_aware implement the aforementioned method.
So I'm now trying to install the packages in my brightway2 conda environment.
I managed to get bw2regional through conda, but I can't manage to install bw2_aware except through pip.
I managed to install bw2_aware by using the wheel file and pip install --no-deps and then tweaking a line in the source code for the fiona import; now I'm getting errors when running
bwr.bw2regionalsetup()
bw2_aware.import_aware()
SSLError: HTTPSConnectionPool(host='pandarus.brightwaylca.org', port=443): Max retries exceeded with url: / (Caused by SSLError(CertificateError("hostname 'pandarus.brightwaylca.org' doesn't match either of '*.parkingcrew.net', 'parkingcrew.net'",),))
Now I'm trying to understand whether I can apply this to ecoinvent, and how. I'm really unsure whether I can add the different geographies to the elementary flows in the database so that I can correctly calculate AWARE in an LCA.
I already saw that importing AWARE lets me choose it as an impact category in the Activity Browser, though I cannot see the geographies in the CFs shown in the Characterization Factors tab.
So I then tried to calculate an LCA with AB using the AWARE method and 2 sample processes:
diesel, burned in agricultural machinery | diesel, burned in agricultural machinery | GLO | megajoule | ECOINVENT3.7.1
electricity, high voltage | market for electricity, high voltage | IT | kilowatt hour | ECOINVENT3.7.1
and I get these results (the first number is Agricultural, the other is non-Agricultural):
diesel, burned in agricultural machinery | GLO    0.014757994762941706    0.00654978730728395
market for electricity, high voltage | IT         0.285207979534988       0.12657895834095712
I wonder if this is correct.
Your intuition is correct - constructing an LCIA method is an easy way to do this, because there are unspoken assumptions behind these flows: they are mostly positive numbers, but some represent consumption while others represent release.
Here is an example using Brightway 2.5; it would need to be adapted for version 2:
import bw2data as bd
import bw2calc as bc
import numpy as np

bd.projects.set_current("ecoinvent 3.7.1")
ecoinvent = bd.Database("ecoinvent 3.7.1")
beet = ecoinvent.search("beet")[1]

water_method = bd.Method(("Water", "raw"))
water_method.register()
water_method.write([
    (x, -1 if x['categories'][0] == 'natural resource' else 1)
    for x in bd.Database("biosphere3")
    if x['name'].startswith('Water')
])

demand, data_objs, _ = bd.prepare_lca_inputs(demand={beet: 1}, method=("Water", "raw"))
lca = bc.LCA(demand=demand, data_objs=data_objs)
lca.lci()
lca.lcia()

coo = lca.characterized_inventory.tocoo()
results = sorted(zip(np.abs(coo.data), coo.data, coo.row, coo.col), reverse=True)
for a, b, c, d in results[:10]:
    print("{:.6f}\t{}\n\t\t{}".format(
        float(b),
        bd.get_activity(lca.dicts.biosphere.reversed[c]),
        bd.get_activity(lca.dicts.activity.reversed[d])
    ))
With the result:
 0.009945   'Water' (cubic meter, None, ('water',))
            'electricity production, hydro, run-of-river' (kilowatt hour, CH, None)
-0.009945   'Water, turbine use, unspecified natural origin' (cubic meter, None, ('natural resource', 'in water'))
            'electricity production, hydro, run-of-river' (kilowatt hour, CH, None)
 0.009514   'Water' (cubic meter, None, ('water',))
            'electricity production, hydro, run-of-river' (kilowatt hour, RoW, None)
-0.009514   'Water, turbine use, unspecified natural origin' (cubic meter, None, ('natural resource', 'in water'))
            'electricity production, hydro, run-of-river' (kilowatt hour, RoW, None)
 0.007264   'Water' (cubic meter, None, ('water',))
            'electricity production, hydro, run-of-river' (kilowatt hour, FR, None)
-0.007264   'Water, turbine use, unspecified natural origin' (cubic meter, None, ('natural resource', 'in water'))
            'electricity production, hydro, run-of-river' (kilowatt hour, FR, None)
-0.003371   'Water, river' (cubic meter, None, ('natural resource', 'in water'))
            'irrigation, sprinkler' (cubic meter, CH, None)
 0.003069   'Water' (cubic meter, None, ('water', 'ground-'))
            'sugar beet production' (kilogram, CH, None)
 0.001935   'Water' (cubic meter, None, ('air',))
            'sugar beet production' (kilogram, CH, None)
 0.001440   'Water' (cubic meter, None, ('water',))
            'electricity production, hydro, run-of-river' (kilowatt hour, CN-SC, None)
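Since the question also asks for the geography of the water flows, a hypothetical extension of the loop above (not part of the original answer) could aggregate the characterized amounts by each activity's 'location' attribute:
from collections import defaultdict

# Hypothetical sketch: sum the characterized water flows by the location
# of the activity column they come from.
by_location = defaultdict(float)
for value, row, col in zip(coo.data, coo.row, coo.col):
    act = bd.get_activity(lca.dicts.activity.reversed[col])
    by_location[act.get('location', 'GLO')] += value

for location, total in sorted(by_location.items(), key=lambda kv: -abs(kv[1])):
    print(location, total)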
There isn't a way to use regionalized calculations in the activity browser, and the AWARE method (incorrectly) relies on an external web service which is not strictly necessary. So the short answer is that this isn't possible right now, but as you are the second person to ask about this in a week, we need to get it working ASAP.

Confusion About Implementing LeafSystem With Vector Output Port Correctly

I'm a student teaching myself Drake, specifically pydrake with Dr. Russ Tedrake's excellent Underactuated Robotics course. I am trying to write a combined energy shaping and lqr controller for keeping a cartpole system balanced upright. I based the diagram on the cartpole example found in Chapter 3 of Underactuated Robotics [http://underactuated.mit.edu/acrobot.html], and the SwingUpAndBalanceController on Chapter 2: [http://underactuated.mit.edu/pend.html].
I have found that, because I use the cart_pole.sdf model, I have to create an abstract input port to receive a FramePoseVector from cart_pole.get_output_port(0). From there, I know that I have to create a control signal output of type BasicVector to feed into a Saturation block before feeding into the cartpole's actuation port.
The problem I'm encountering right now is that I'm not sure how to get the system's current state data in the DeclareVectorOutputPort's callback function. I was under the assumption that I would use the LeafContext parameter in the callback function, OutputControlSignal, to obtain the continuous state vector as a BasicVector. However, the resulting vector, x_bar, is always NaN. Out of desperation (and to test that the rest of my program worked), I set x_bar from the controller's initialization cart_pole_context and found that the simulation runs with a control signal of 0.0 (as expected). I can also set the output to 100, and the cartpole simulation just flies off into endless space (as expected).
TL;DR: What is the proper way to obtain the continuous state vector in a custom controller extending LeafSystem with a DeclareVectorOutputPort?
Thank you for any help! I really appreciate it :) I've been teaching myself so it's been a little arduous haha.
# Combined Energy Shaping (SwingUp) and LQR (Balance) Controller
# with a simple state machine
class SwingUpAndBalanceController(LeafSystem):

    def __init__(self, cart_pole, cart_pole_context, input_i, ouput_i, Q, R, x_star):
        LeafSystem.__init__(self)
        self.DeclareAbstractInputPort("state_input", AbstractValue.Make(FramePoseVector()))
        self.DeclareVectorOutputPort("control_signal", BasicVector(1),
                                     self.OutputControlSignal)
        (self.K, self.S) = BalancingLQRCtrlr(cart_pole, cart_pole_context,
                                             input_i, ouput_i, Q, R, x_star).get_LQR_matrices()
        (self.A, self.B, self.C, self.D) = BalancingLQRCtrlr(cart_pole, cart_pole_context,
                                                             input_i, ouput_i,
                                                             Q, R, x_star).get_lin_matrices()
        self.energy_shaping = EnergyShapingCtrlr(cart_pole, x_star)
        self.energy_shaping_context = self.energy_shaping.CreateDefaultContext()
        self.cart_pole_context = cart_pole_context

    def OutputControlSignal(self, context, output):
        #xbar = copy(self.cart_pole_context.get_continuous_state_vector())
        xbar = copy(context.get_continuous_state_vector())
        xbar_ = np.array([xbar[0], xbar[1], xbar[2], xbar[3]])
        xbar_[1] = wrap_to(xbar_[1], 0, 2.0*np.pi) - np.pi
        # If x'Sx <= 2, then use LQR ctrlr. Cost-to-go J_star = x^T * S * x
        threshold = np.array([2.0])
        if (xbar_.dot(self.S.dot(xbar_)) < 2.0):
            #output[:] = -self.K.dot(xbar_)  # u = -Kx
            output.set_value(-self.K.dot(xbar_))
        else:
            self.energy_shaping.get_input_port(0).FixValue(self.energy_shaping_context,
                                                           self.cart_pole_context.get_continuous_state_vector())
            output_val = self.energy_shaping.get_output_port(0).Eval(self.energy_shaping_context)
            output.set_value(output_val)
        print(output)
Here are two things that might help:
If you want to get the state of the cart-pole from MultibodyPlant, you probably want to be connecting to the continuous_state output port, which gives you a normal vector instead of the abstract-type FramePoseVector. In that case, your call to get_input_port().Eval(context) should work just fine.
If you do really want to read the FramePoseVector, then you have to evaluate the input port slightly differently. You can find an example of that here.
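Following the first suggestion, here is a minimal, hypothetical sketch (the class name, gain matrix, and 4-element cart-pole state are assumptions, not from the course code) of a LeafSystem that reads the plant state through a vector input port inside the DeclareVectorOutputPort callback:
from pydrake.systems.framework import BasicVector, LeafSystem

class MinimalStateFeedbackController(LeafSystem):
    """Hypothetical sketch: read the plant state through a vector input port."""

    def __init__(self, K):
        LeafSystem.__init__(self)
        self.K = K  # 1x4 gain matrix, e.g. from LQR
        # Vector-valued input port sized for the cart-pole state [x, theta, xdot, thetadot].
        self.state_port = self.DeclareVectorInputPort("cart_pole_state", BasicVector(4))
        self.DeclareVectorOutputPort("control_signal", BasicVector(1), self.CalcControl)

    def CalcControl(self, context, output):
        x = self.state_port.Eval(context)   # numpy array of length 4
        output.SetFromVector(-self.K @ x)   # u = -Kx

# Wiring sketch (inside a DiagramBuilder, assuming `plant` is the cart-pole MultibodyPlant):
#   controller = builder.AddSystem(MinimalStateFeedbackController(K))
#   builder.Connect(plant.get_state_output_port(), controller.get_input_port(0))
#   builder.Connect(controller.get_output_port(0), plant.get_actuation_input_port())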

unable to get rid of all emojis

I need help removing emojis. I looked at some other Stack Overflow questions, and this is what I am doing, but for some reason my code doesn't get rid of all the emojis.
d= {'alexveachfashion': 'Fashion Style * Haute Couture * Wearable Tech * VR\n👓👜⌚👠\nSoundCloud is Live #alexveach\n👇New YouTube Episodes ▶️👇', 'andrewvng': 'Family | Fitness | Friends | Gym | Food', 'runvi.official': 'Accurate measurement via SMART insoles & real-time AI coaching. Improve your technique & BOOST your performance with every run.\nSoon on Kickstarter!', 'triing': 'Augmented Jewellery™️ • Montreal. Canada.', 'gedeanekenshima': 'Prof na Etec Albert Einstein, Mestranda em Automação e Controle de Processos, Engenheira de Controle e Automação, Técnica em Automação Industrial.', 'jetyourdaddy': '', 'lavonne_sun': '☄️🔭 ✨\n°●°。Visual Narrative\nA creative heart with a poetic soul.\n————————————\nPARSONS —— Design & Technology', 'taysearch': 'All the World’s Information At Your Fingertips. (Literally) Est. 1991🇺🇸 🎀#PrincessofSearch 🔎Sample 👇🏽 the Search Engine Here 🗽', 'hijewellery': 'Fine 3D printed jewellery for tech lovers #3dprintedjewelry #wearabletech #jewellery', 'yhanchristian': 'Estudante de Engenharia, Maker e viciado em café.', 'femka': 'Fashion Futurist + Fashion Tech Lab Founder #technoirlab + Fashion Designer / Parsons & CSM Grad / Obsessed with #fashiontech #future #cryptocurrency', 'sinhbisen': 'Creator, TRiiNG, augmented jewellery label ⭕️ Transhumanist ⭕️ Corporeal cartographer ⭕️', 'stellawearables': '#StellaWearables ✉️Info#StellaWearables.com Premium Wearable Technology That Monitors Personal Health & Environments ☀️🏝🏜🏔', 'ivoomi_india': 'We are the manufacturers of the most innovative technologies and user-friendly gadgets with a global presence.', 'bgutenschwager': "When it comes to life, it's all about the experience.\nGoogle Mapper 🗺\n360 Photographer 📷\nBrand Rep #QuickTutor", 'storiesofdesign': 'Putting stories at the heart of brands and businesses | Cornwall and London | #storiesofdesign', 'trume.jp': '草創期から国産ウオッチの製造に取り組み、挑戦を続けてきたエプソンが世界に放つ新ブランド「TRUME」(トゥルーム)。目指すのは、最先端技術でアナログウオッチを極めるブランド。', 'themarinesss': "I didn't choose the blog life, the blog life chose me | Aspiring Children's Book Author | www.slayathomemum.com", 'ayowearable': 'The world’s first light-based wearable that helps you sleep better, beat jet lag and have more energy! #goAYO Get yours at:', 'wearyourowntechs': 'Bringing you the latest trends, Current Products and Reviews of Wearable Technology. Discover how they can enhance your Life and Lifestyle', 'roxfordwatches': 'The Roxford | The most stylish and customizable fitness smartwatch. Tracks your steps/calories/dist/sleep. Comes with FOUR bands, and a travel case!', 'playertek': "Track your entire performance - every training session, every match. \nBecause the best players don't hide.", '_kate_hartman_': '', 'hmsmc10': 'Health & Wellness 🍎\nBoston, MA 🏙\nSuffolk MPA ‘17 🎓 \n.\nJust Strong Ambassador 🏋🏻\u200d♀️', 'gadgetxtreme': 'Dedicated to reviewing gadgets, technologies, internet products and breaking tech news. Follow us to see daily vblogs on all the disruptive tech..', 'freedom.journey.leader': '📍MN\n🍃Wife • Homeschooling Mom to 5 🐵 • D Y I lover 🔨 • Small town living in MN. 🌿 \n📧Ashleybp5#gmail.com \n#homeschool #bossmom #builder #momlife', 'arts_food_life': 'Life through my phone.', 'medgizmo': 'Wearable #tech: #health #healthcare #wellness #gadgets #apps. 
Images/links provided as information resource only; doesn’t mean we endorse referenced', 'sawearables': 'The home of wearable tech in South Africa!\n--> #WearableTech #WearableTechnology #FitnessTech Find your wearable #', 'shop.mercury': 'Changing the way you charge.⚡️\nGet exclusive product discounts, and help us reach our goal below!🔋', 'invisawear': 'PRE-ORDERS NOW AVAILABLE! Get yours 25% OFF here: #girlboss #wearabletech'}
for key in d:
    print("---with emojis----")
    print(d[key])
    print("---emojis removed----")
    x = ''.join(c for c in d[key] if c <= '\uFFFF')
    print(x)
output example
---with emojis----
📍MN
🍃Wife • Homeschooling Mom to 5 🐵 • D Y I lover 🔨 • Small town living in MN. 🌿
📧Ashleybp5#gmail.com
#homeschool #bossmom #builder #momlife
---emojis removed----
MN
Wife • Homeschooling Mom to 5 • D Y I lover • Small town living in MN.
Ashleybp5#gmail.com
#homeschool #bossmom #builder #momlife
---with emojis----
Changing the way you charge.⚡️
Get exclusive product discounts, and help us reach our goal below!🔋
---emojis removed----
Changing the way you charge.⚡️
Get exclusive product discounts, and help us reach our goal below!
There is no technical definition of what an "emoji" is. Various glyphs may be used to render printable characters, symbols, control characters and the like. What seems like an "emoji" to you may be part of normal script to others.
What you probably want to do is to look at the Unicode category of each character and filter out various categories. While this does not solve the "emoji"-definition-problem per se, you get much better control over what you are actually doing without removing, for example, literally all characters of languages spoken by 2/3 of the planet.
Instead of filtering out certain categories, you may filter everything except the lower- and uppercase letters (and numbers). However, be aware that ꙭ is not "the googly eyes emoji" but the CYRILLIC SMALL LETTER DOUBLE MONOCULAR O, which is a normal lowercase letter to millions of people.
For example:
import unicodedata
s = "🍃Wife • Homeschooling Mom to 5 🐵 • D Y I lover 🔨 • Small town living in MN. 🌿"
# Just filter category "symbol"
t = ''.join(c for c in s if unicodedata.category(c) not in ('So', ))
print(t)
...results in
Wife • Homeschooling Mom to 5 • D Y I lover • Small town living in MN.
This may still not be emoji-free enough: the • is technically a form of punctuation, so filter that as well.
# Filter symbols and punctuations. You may want 'Cc' as well,
# to get rid of control characters. Beware that newlines are a
# form of control-character.
t = ''.join(c for c in s if unicodedata.category(c) not in ('So', 'Po'))
print(t)
And you get
Wife Homeschooling Mom to 5 D Y I lover Small town living in MN
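Applied to the whole dictionary, a minimal sketch along these lines (the exact category choice is an assumption about what counts as an emoji here) also drops the invisible variation selector U+FE0F that lets the ⚡️ above survive the original filter:
import unicodedata

def strip_emoji(text):
    # Drop "Symbol, other" characters plus the variation selectors
    # (U+FE00..U+FE0F) that often trail emoji, like the ⚡️ example above.
    return ''.join(
        c for c in text
        if unicodedata.category(c) != 'So' and not ('\ufe00' <= c <= '\ufe0f')
    )

cleaned = {key: strip_emoji(value) for key, value in d.items()}
print(cleaned['shop.mercury'])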

distributed Tensorflow tracking timestamps for synchronization operations

I am new to TensorFlow. Currently, I am trying to evaluate the performance of distributed TensorFlow using the Inception model provided by the TensorFlow team.
What I want is to generate timestamps for some critical operations in a Parameter Server - Worker architecture, so I can measure the bottleneck (the network lag due to parameter transfer/synchronization, or the parameter computation cost) on replicas for one iteration (batch).
I came up with the idea of adding a customized dummy py_func operator that prints timestamps inside inception_distributed_train.py, with some control dependencies. Here are some pieces of code that I added:
def timer(s):
    print("-------- thread ID ", threading.current_thread().ident,
          ", ---- Process ID ----- ", getpid(), " ~~~~~~~~~~~~~~~ ", s,
          datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S.%f'))
    return False

dummy1 = tf.py_func(timer, ["got gradients, before dequeues token "], tf.bool)
dummy2 = tf.py_func(timer, ["finished dequeueing the token "], tf.bool)
I modified
apply_gradients_op = opt.apply_gradients(grads, global_step=global_step)
with tf.control_dependencies([apply_gradients_op]):
    train_op = tf.identity(total_loss, name='train_op')
into
with tf.control_dependencies([dummy1]):
    apply_gradients_op = opt.apply_gradients(grads, global_step=global_step)
with tf.control_dependencies([apply_gradients_op]):
    with tf.control_dependencies([dummy2]):
        train_op = tf.identity(total_loss, name='train_op')
hoping to print the timestamps before apply_gradients_op starts evaluating and after it finishes, by enforcing node dependencies.
I did similar things inside sync_replicas_optimizer.apply_gradients, by adding two dummy print nodes before and after update_op:
dummy1 = py_func(timer, ["---------- before update_op "], tf_bool)
dummy2 = py_func(timer, ["---------- finished update_op "], tf_bool)

# sync_op will be assigned to the same device as the global step.
with ops.device(global_step.device), ops.name_scope(""):
    with ops.control_dependencies([dummy1]):
        update_op = self._opt.apply_gradients(aggregated_grads_and_vars, global_step)

# Clear all the gradients queues in case there are stale gradients.
clear_queue_ops = []
with ops.control_dependencies([update_op]):
    with ops.control_dependencies([dummy2]):
        for queue, dev in self._one_element_queue_list:
            with ops.device(dev):
                stale_grads = queue.dequeue_many(queue.size())
                clear_queue_ops.append(stale_grads)
I understand that apply_gradients_op is the train_op returned by sync_replicas_optimizer.apply_gradients, and that it is the op that dequeues a token (global_step) from the sync_queue managed by the chief worker using the chief_queue_runner, so that a replica can exit the current batch and start a new one.
In theory, apply_gradients_op should take some time, since a replica has to wait before it can dequeue the token (global_step) from the sync_queue. However, in the printed results I got for one replica, the time difference for executing apply_gradients_op is pretty short (~1/1000 sec), and sometimes the print output is non-deterministic (especially for the chief worker). Here is a snippet of the output on the workers (I am running 2 workers and 1 PS):
chief worker (worker 0) output
worker 1 output
My questions are:
1) I need to record the time TensorFlow takes to execute an op (such as train_op, apply_gradients_op, compute_gradients_op, etc.)
2) Is this the right direction to go, given my ultimate goal is to record the elapsed time for executing certain operations (such as the difference between the time a replica finishes computing gradients and the time it gets the global_step from sync_token)?
3) If this is not the way it should go, please guide me with some insights about the possible ways I could achieve my ultimate goal.
Thank you so much for reading my long, long post! I have spent weeks working on this.
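No answer is recorded here, but one commonly used approach to op-level timing in TF 1.x, offered only as a hedged sketch rather than something from this thread, is to collect per-op step_stats via RunOptions/RunMetadata and export a Chrome trace; this records the start time and duration of every op (including the dequeue from the sync token queue) without adding py_func nodes to the graph:
import tensorflow as tf
from tensorflow.python.client import timeline

# Sketch only: assumes `sess` and `train_op` come from the distributed
# Inception training loop described above.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

sess.run(train_op, options=run_options, run_metadata=run_metadata)

# step_stats contains wall-clock start times and durations for each op on each device.
trace = timeline.Timeline(run_metadata.step_stats)
with open('timeline_step.json', 'w') as f:
    f.write(trace.generate_chrome_trace_format())
# Load timeline_step.json in chrome://tracing to inspect per-op timing.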
