Model uncertain emissions together with known emissions over time using Brightway

I would like to understand how to model exchanges whose values are known to change sequentially (over time) together with exchanges that are uncertain, so that in each period some exchanges have known values while others are uncertain (but independent). I think this is possible with the interfaces in Brightway 2.5, but I have not figured out how to do it.
A simple example: an activity emitting CO2 and CH4. Imagine we have measurements of the CO2 emissions over time, but the CH4 emissions follow some random state that is independent of the CO2, so for each period the CH4 emissions could take several values.
I think I know how to model the CO2 emissions using a dynamic_vector (see example below), but I don't know how I would model the CH4 emissions in this example. Perhaps with a dynamic_array?
import brightway2 as bw
import numpy as np
import bw_processing as bwp
import bw2data as bd
import bw2calc
bw.projects.set_current('testing_things')
co2 = next(x for x in bd.Database("biosphere3")
           if x['name'] == 'Carbon dioxide, fossil'
           and x['categories'] == ('air',))
ch4 = next(x for x in bd.Database("biosphere3")
           if x['name'] == 'Methane, fossil'
           and x['categories'] == ('air',))
a_key = ("testdb", "a")
act_a_def = {
    'name': 'a',
    'unit': 'kilogram',
    'exchanges': [
        {"input": co2.key, "type": "biosphere", "amount": 10},
        {"input": a_key, "output": a_key, "type": "production", "amount": 1},
        {"input": ch4.key, "type": "biosphere", "amount": 1},
    ],
}
db = bd.Database("testdb")
db.write(
    {
        a_key: act_a_def,
    }
)
ipcc2013 = ('IPCC 2013', 'climate change', 'GWP 100a')
a = bw.get_activity(('testdb','a'))
lca = bw.LCA({a:1},ipcc2013)
lca.lci()
lca.lcia()
lca.score
Let's assume I know the CO2 emissions because I have a sensor measuring them:
co2_measurements = [10,11,9,10,8,11,12,13,8,12]
co2_interface = (np.array([c]) for c in co2_measurements)
indices_array = np.array(
    [(co2.id, a.id)],
    dtype=bwp.INDICES_DTYPE,
)
flip_array = np.array([True])
hp = bwp.create_datapackage()
hp.add_dynamic_vector(
    matrix="biosphere_matrix",
    interface=co2_interface,
    indices_array=indices_array,
    flip_array=flip_array,
)
fu, data_objs, _ = bd.prepare_lca_inputs({a: 1}, method=ipcc2013)
lca = bw.LCA(fu, data_objs=data_objs + [hp])
lca.lci()
lca.lcia()
print(lca.score)
for _ in range(5):
    next(lca)
    print(lca.score)
18.700000762939453
20.700000762939453
19.700000762939453
21.700000762939453
18.700000762939453
17.700000762939453

I found a way to do it, but it strikes me as awkward and probably not how the dynamic calculations were intended to be used, since the datapackage is instantiated again on every iteration. In this solution the CH4 is the one with the time series and the CO2 is the random one, but it does not really matter.
co2 = next(x for x in bd.Database("biosphere3")
           if x['name'] == 'Carbon dioxide, fossil'
           and x['categories'] == ('air',))
ch4 = next(x for x in bd.Database("biosphere3")
           if x['name'] == 'Methane, fossil'
           and x['categories'] == ('air',))
indices_co2 = np.array([(co2.id, a.id)], dtype=bwp.INDICES_DTYPE)
flip_array_co2 = np.array([False] * len(indices_co2))
indices_ch4 = np.array([(ch4.id, a.id)], dtype=bwp.INDICES_DTYPE)
flip_array_ch4 = np.array([False] * len(indices_ch4))
ch4_measurements = [np.array([r / 10]) for r in range(5, 100, 1)]

class Interface_timeseries:
    """Iterator that returns the next measurement on each call to next()."""
    def __init__(self, ts_data):
        self.ts = ts_data
        self.idx = 0

    def __iter__(self):
        return self

    def __next__(self):
        try:
            item = self.ts[self.idx]
        except IndexError:
            raise StopIteration()
        self.idx += 1
        return item

def init_dynamic(act, lcia_method, co2_vector, ch4_vector):
    dynamic_ghg = bwp.create_datapackage(combinatorial=False, sequential=False)
    # CO2 random emissions
    dynamic_ghg.add_dynamic_vector(
        matrix=co2_vector['matrix'],
        indices_array=co2_vector['indices_array'],
        flip_array=co2_vector['flip_array'],
        interface=co2_vector['interface'],
    )
    # CH4 measurements
    dynamic_ghg.add_dynamic_vector(
        matrix=ch4_vector['matrix'],
        indices_array=ch4_vector['indices_array'],
        flip_array=ch4_vector['flip_array'],
        interface=ch4_vector['interface'],
    )
    fu, data_objs, _ = bd.prepare_lca_inputs({act: 1}, method=lcia_method)
    lca_dynamic = bw2calc.LCA(fu, data_objs=data_objs + [dynamic_ghg])
    lca_dynamic.lci()
    lca_dynamic.lcia()
    return lca_dynamic

iteration_results = []
for it in range(10):
    co2_vector = {}
    co2_vector['matrix'] = 'biosphere_matrix'
    co2_vector['indices_array'] = indices_co2
    co2_vector['flip_array'] = flip_array_co2
    co2_vector['interface'] = (np.array([row]) for row in np.random.normal(size=100))
    ch4_vector = {}
    ch4_vector['matrix'] = 'biosphere_matrix'
    ch4_vector['indices_array'] = indices_ch4
    ch4_vector['flip_array'] = flip_array_ch4
    ch4_vector['interface'] = Interface_timeseries(ch4_measurements)
    dlca = init_dynamic(a, ipcc2013, co2_vector, ch4_vector)
    scores = []
    for _ in range(len(ch4_measurements)):
        try:
            next(dlca)
            scores.append(dlca.score)
        except StopIteration:
            print('sequence completed')
            break
    iteration_results.append(scores)
Open to suggestions on how to do it better.
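For what it's worth, here is a minimal sketch of the direction I had in mind, following the first example's arrangement (measured CO2, random CH4): a single datapackage holding both interfaces, so the LCA object can simply be advanced with next() without rebuilding anything each iteration. Whether sequential=True is actually needed for the measured series is an assumption on my part.
def ch4_random():
    # uncertain emission: draw a fresh, independent value each period
    while True:
        yield np.array([np.random.normal(loc=1.0, scale=0.2)])

dp = bwp.create_datapackage(sequential=True)
dp.add_dynamic_vector(
    matrix="biosphere_matrix",
    interface=(np.array([c]) for c in co2_measurements),  # known series
    indices_array=np.array([(co2.id, a.id)], dtype=bwp.INDICES_DTYPE),
    flip_array=np.array([False]),
)
dp.add_dynamic_vector(
    matrix="biosphere_matrix",
    interface=ch4_random(),  # independent random values
    indices_array=np.array([(ch4.id, a.id)], dtype=bwp.INDICES_DTYPE),
    flip_array=np.array([False]),
)
fu, data_objs, _ = bd.prepare_lca_inputs({a: 1}, method=ipcc2013)
lca = bw2calc.LCA(fu, data_objs=data_objs + [dp])
lca.lci()
lca.lcia()
for _ in range(len(co2_measurements)):
    try:
        next(lca)
        print(lca.score)
    except StopIteration:
        break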

Related

pyspark modify class attributes using spark.sql.rdd.foreach()

The main task is to connect to Hive and read data using a Spark RDD.
I have tried the code below. Connection and reading both succeed, but when I want to modify the value of self.jobUserProfile, it fails. I then print this value in three positions (marked #1, #2 and #3). In the first position the value is valid, but in the second and third positions the dict is empty. It seems that the modification has never been assigned to the class attribute.
I have tried response = spark.sql('select userid, logtime from hive.dwd_log_login_i_d limit 10').collect() and iterating over that result, but when the data volume is too large the performance may decline.
When I change response.rdd.foreach(lambda x: self.readLoginFunction(x)) to response.rdd.map(lambda x: self.readLoginFunction(x)), the target values in all three positions are empty.
I'm a newbie in Spark. Any advice would be helpful. Thanks in advance.
from analysis.common.db.hive.connectHive import *
import collections

class OperateHive():
    def __init__(self):
        self.jobUserProfile = collections.defaultdict(dict)

    def readLoginFunction(self, e):
        dic = collections.defaultdict()
        dic['userid'] = e[0]
        dic['logtime'] = e[1]
        self.jobUserProfile[e[0]] = dic
        print(self.jobUserProfile)  # 1

    def readLogin(self, spark):
        response = spark.sql('select userid, logtime from hive.dwd_log_login_i_d limit 10')
        response.rdd.foreach(lambda x: self.readLoginFunction(x))
        print(self.jobUserProfile)  # 2

if __name__ == '__main__':
    spark = connectHive(['conf/hdfs-site.xml', 'conf/hive-site.xml'], 'utf-8')
    operateHive = OperateHive()
    operateHive.readLogin(spark)
    print(operateHive.jobUserProfile)  # 3
Finally the code below works.
from analysis.common.db.hive.connectHive import *
import collections

class OperateHive():
    def readLoginFunction(self, e, jobUserProfile, devAppProfile):
        dic = collections.defaultdict()
        dic['userid'] = e[0]
        dic['logtime'] = e[1]
        jobUserProfile[e[0]] = dic
        devAppProfile[e[0]] = dic
        print(jobUserProfile)
        return jobUserProfile, devAppProfile

    def readLogin(self, spark, jobUserProfile, devAppProfile):
        response = spark.sql('select userid, logtime from hive.dwd_log_login_i_d limit 10')
        rdd1 = response.rdd.map(lambda x: self.readLoginFunction(x, jobUserProfile, devAppProfile))
        return rdd1.top(1)[0][0]

if __name__ == '__main__':
    spark = connectHive(['conf/hdfs-site.xml', 'conf/hive-site.xml'], 'utf-8')
    jobUserProfile = collections.defaultdict(dict)
    devAppProfile = collections.defaultdict(dict)
    operateHive = OperateHive()
    jobUserProfile = operateHive.readLogin(spark, jobUserProfile, devAppProfile)
    print(jobUserProfile)
But when I remove devAppProfile, the code looks like this:
from analysis.common.db.hive.connectHive import *
import collections

class OperateHive():
    def readLoginFunction(self, e, jobUserProfile, devAppProfile):
        dic = collections.defaultdict()
        dic['userid'] = e[0]
        dic['logtime'] = e[1]
        jobUserProfile[e[0]] = dic
        devAppProfile[e[0]] = dic
        print(jobUserProfile)
        return jobUserProfile

    def readLogin(self, spark, jobUserProfile, devAppProfile):
        response = spark.sql('select userid, logtime from hive.dwd_log_login_i_d limit 10')
        response.rdd.map(lambda x: self.readLoginFunction(x, jobUserProfile, devAppProfile))

if __name__ == '__main__':
    spark = connectHive(['conf/hdfs-site.xml', 'conf/hive-site.xml'], 'utf-8')
    jobUserProfile = collections.defaultdict(dict)
    devAppProfile = collections.defaultdict(dict)
    operateHive = OperateHive()
    operateHive.readLogin(spark, jobUserProfile, devAppProfile)
Here the rdd.map() never actually runs: nothing at all is printed by the print(jobUserProfile) inside readLoginFunction.
Then I change the code like below, which works again.
from analysis.common.db.hive.connectHive import *
import collections

class OperateHive():
    def readLoginFunction(self, e, jobUserProfile, devAppProfile):
        dic = collections.defaultdict()
        dic['userid'] = e[0]
        dic['logtime'] = e[1]
        jobUserProfile[e[0]] = dic
        devAppProfile[e[0]] = dic
        print(jobUserProfile)
        return jobUserProfile

    def readLogin(self, spark, jobUserProfile, devAppProfile):
        response = spark.sql('select userid, logtime from hive.dwd_log_login_i_d limit 10')
        rdd1 = response.rdd.map(lambda x: self.readLoginFunction(x, jobUserProfile, devAppProfile))
        return rdd1.collect()[-1]

if __name__ == '__main__':
    spark = connectHive(['conf/hdfs-site.xml', 'conf/hive-site.xml'], 'utf-8')
    jobUserProfile = collections.defaultdict(dict)
    devAppProfile = collections.defaultdict(dict)
    operateHive = OperateHive()
    jobUserProfile = operateHive.readLogin(spark, jobUserProfile, devAppProfile)
    print(jobUserProfile)
The problem in this post is about closures, but I can't work out why the three versions in the answer behave differently.
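As far as I can tell, this is the closure behaviour at work. A minimal, self-contained sketch (hypothetical data, local master) of the difference: foreach() runs on the executors against copies of the closure, so a driver-side dict is never updated, while map() is lazy and only executes once an action such as collect() pulls the returned values back to the driver.
from pyspark.sql import SparkSession
import collections

spark = SparkSession.builder.master("local[2]").getOrCreate()
rdd = spark.sparkContext.parallelize([(1, "a"), (2, "b")])

driver_dict = collections.defaultdict(dict)

def record(e):
    # runs on the executor: mutates the executor's copy of driver_dict only
    driver_dict[e[0]] = {"logtime": e[1]}

rdd.foreach(record)
print(driver_dict)  # still empty on the driver

# returning data instead of mutating shared state does work,
# but only once an action (collect, top, ...) triggers the lazy map
collected = rdd.map(lambda e: (e[0], {"logtime": e[1]})).collect()
print(dict(collected))  # {1: {'logtime': 'a'}, 2: {'logtime': 'b'}}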

Python 3 multiprocessing and OpenCV: problem with dictionary sharing between processes

I would like to use multiprocessing to compute the SIFT extraction and SIFT matching for object detection.
For now, the problem is that the return value of the function does not insert data into the dictionary.
I'm using the Manager class, and the images are opened inside the function, but it does not work.
Finally, my idea is:
Compute the keypoints for every reference image, then use these keypoints as a parameter of a second function that compares and matches them with the keypoints and descriptors of the test image.
My code is:
# %% Import Section
import cv2
import numpy as np
from matplotlib import pyplot as plt
import os
from datetime import datetime
from multiprocessing import Process, cpu_count, Manager, Lock
import argparse

# %% path section
tests_path = 'TestImages/'
references_path = 'ReferenceImages2/'
result_path = 'ResultParametrizer/'

#%% Number of processors
cpus = cpu_count()

# %% parameter section
eps = 1e-7
useTwo = False  # using the m and n keypoint better with False
# good point parameters
distanca_coefficient = 0.75
# gms parameters
gms_thresholdFactor = 3
gms_withRotation = True
gms_withScale = True
# flann parameters
flann_trees = 5
flann_checks = 50

#%% Locker
lock = Lock()

# %% function definition
def keypointToDictionaries(keypoint):
    x, y = keypoint.pt
    pt = float(x), float(y)
    angle = float(keypoint.angle) if keypoint.angle is not None else None
    size = float(keypoint.size) if keypoint.size is not None else None
    response = float(keypoint.response) if keypoint.response is not None else None
    class_id = int(keypoint.class_id) if keypoint.class_id is not None else None
    octave = int(keypoint.octave) if keypoint.octave is not None else None
    return {
        'point': pt,
        'angle': angle,
        'size': size,
        'response': response,
        'class_id': class_id,
        'octave': octave
    }

def dictionariesToKeypoint(dictionary):
    kp = cv2.KeyPoint()
    kp.pt = dictionary['pt']
    kp.angle = dictionary['angle']
    kp.size = dictionary['size']
    kp.response = dictionary['response']
    kp.octave = dictionary['octave']
    kp.class_id = dictionary['class_id']
    return kp

def rootSIFT(dictionary, image_name, image_path, eps=eps):
    # SIFT init
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.xfeatures2d.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    descriptors /= (descriptors.sum(axis=1, keepdims=True) + eps)
    descriptors = np.sqrt(descriptors)
    print('Finished computing, PID: ', os.getpid())
    lock.acquire()
    dictionary[image_name]['keypoints'] = keypoints
    dictionary[image_name]['descriptors'] = descriptors
    lock.release()

def featureMatching(reference_image, reference_descriptors, reference_keypoints, test_image, test_descriptors,
                    test_keypoints, flann_trees=flann_trees, flann_checks=flann_checks):
    # FLANN parameters
    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=flann_trees)
    search_params = dict(checks=flann_checks)  # or pass empty dictionary
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    flann_matches = flann.knnMatch(reference_descriptors, test_descriptors, k=2)
    matches_copy = []
    for i, (m, n) in enumerate(flann_matches):
        if m.distance < distanca_coefficient * n.distance:
            matches_copy.append(m)
    gsm_matches = cv2.xfeatures2d.matchGMS(reference_image.shape, test_image.shape, keypoints1=reference_keypoints,
                                           keypoints2=test_keypoints, matches1to2=matches_copy,
                                           withRotation=gms_withRotation, withScale=gms_withScale,
                                           thresholdFactor=gms_thresholdFactor)

#%% Starting reference list file creation
reference_init = datetime.now()
print('Start reference file list creation')
reference_image_process_list = []
manager = Manager()
reference_image_dictionary = manager.dict()
reference_image_list = manager.list()
for root, directories, files in os.walk(references_path):
    for file in files:
        if file.endswith('.DS_Store'):
            continue
        reference_image_path = os.path.join(root, file)
        reference_name = file.split('.')[0]
        image = cv2.imread(reference_image_path, cv2.IMREAD_GRAYSCALE)
        reference_image_dictionary[reference_name] = {
            'image': image,
            'keypoints': None,
            'descriptors': None
        }
        proc = Process(target=rootSIFT, args=(reference_image_list, reference_name, reference_image_path))
        reference_image_process_list.append(proc)
        proc.start()
for proc in reference_image_process_list:
    proc.join()
reference_end = datetime.now()
reference_time = reference_end - reference_init
print('End reference file list creation, time required: ', reference_time)
I faced pretty much the same error. In my case, the code hangs at detectAndCompute, not when creating the dictionary. For some reason, SIFT feature extraction is not multiprocessing-safe (to my understanding this is the case on Macs, but I am not totally sure).
I found this in a GitHub thread. Many people say it works, but I couldn't get it to work. (Edit: I tried this later and it works fine.)
Instead I used multithreading, which is pretty much the same code and works perfectly. Of course you need to take the multithreading vs. multiprocessing trade-offs into account.
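A minimal sketch of that multithreading variant (hypothetical paths; assumes an OpenCV build where cv2.SIFT_create is available, older builds use cv2.xfeatures2d.SIFT_create): threads share the interpreter's memory, so the results can go straight into an ordinary dict, with no Manager needed.
import cv2
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def root_sift(image_path, eps=1e-7):
    # same RootSIFT normalisation as in the question, one image per call
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    descriptors /= (descriptors.sum(axis=1, keepdims=True) + eps)
    descriptors = np.sqrt(descriptors)
    return keypoints, descriptors

reference_paths = {"ref1": "ReferenceImages2/ref1.png"}  # hypothetical file names
reference_features = {}
with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(root_sift, path) for name, path in reference_paths.items()}
    for name, future in futures.items():
        reference_features[name] = future.result()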

Bokeh – ColumnDataSource not updating whiskered-plot

I'm having issues with the following code (I've cut out large pieces, but I can add them back in; these seemed like the important parts). In my main code, I set up a plot ("sectionizePlot") which is a simple variation on another whiskered plot, and I'm looking to update it on the fly. In the same script, I'm using a heatmap ("ModifiedGenericHeatMap") which updates fine.
Any ideas how I might update my whiskered plot? Updating the ColumnDataSource doesn't seem to work (which makes sense). I'm guessing I am running into issues because each circle/point is added onto the plot individually.
One idea would be to clear the plot each time and manually re-add the points, but I'm unsure of how to clear the plot each time.
Any help would be appreciated. I'm just a lowly scientist trying to utilize Bokeh in pharma research.
def ModifiedgenericHeatMap(source, maxPct):
    colors = ["#75968f", "#a5bab7", "#c9d9d3", "#e2e2e2", "#dfccce", "#ddb7b1", "#cc7878", "#933b41", "#550b1d"]
    # mapper = LinearColorMapper(palette=colors, low=0, high=data['count'].max())
    mapper = LinearColorMapper(palette=colors, low=0, high=maxPct)
    TOOLS = "hover,save,pan,box_zoom,reset,wheel_zoom"
    globalDist = figure(title="derp",
                        x_range=cols, y_range=list(reversed(rows)),
                        x_axis_location="above", plot_width=1000, plot_height=400,
                        tools=TOOLS, toolbar_location='below')
    globalDist.grid.grid_line_color = None
    globalDist.axis.axis_line_color = None
    globalDist.axis.major_tick_line_color = None
    globalDist.axis.major_label_text_font_size = "5pt"
    globalDist.axis.major_label_standoff = 0
    globalDist.xaxis.major_label_orientation = pi / 3
    globalDist.rect(x="cols", y="rows", width=1, height=1,
                    source=source,
                    fill_color={'field': 'count', 'transform': mapper},
                    line_color=None)
    color_bar = ColorBar(color_mapper=mapper, major_label_text_font_size="5pt",
                         ticker=BasicTicker(desired_num_ticks=len(colors)),
                         # fix this via using a formatter with accounts for
                         formatter=PrintfTickFormatter(format="%d%%"),
                         label_standoff=6, border_line_color=None, location=(0, 0))
    text_props = {"source": source, "text_align": "left", "text_baseline": "middle"}
    x = dodge("cols", -0.4, range=globalDist.x_range)
    r = globalDist.text(x=x, y=dodge("rows", 0.3, range=globalDist.y_range), text="count", **text_props)
    r.glyph.text_font_size = "8pt"
    globalDist.add_layout(color_bar, 'right')
    globalDist.select_one(HoverTool).tooltips = [
        ('Well:', '@rows @cols'),
        ('Count:', '@count'),
    ]
    return globalDist

def sectionizePlot(source, source_error, type, base):
    print("sectionize plot created with typ: " + type)
    colors = []
    for x in range(0, len(base)):
        colors.append(getRandomColor())
    title = type + "-wise Intensity Distribution"
    p = figure(plot_width=600, plot_height=300, title=title)
    p.add_layout(
        Whisker(source=source_error, base="base", upper="upper", lower="lower"))
    for i, sec in enumerate(source.data['base']):
        p.circle(x=source_error.data["base"][i], y=sec, color=colors[i])
    p.xaxis.axis_label = type
    p.yaxis.axis_label = "Intensity"
    if (type.split()[-1] == "Row"):
        print("hit a row")
        conv = dict(enumerate(list("nABCDEFGHIJKLMNOP")))
        conv.pop(0)
        p.xaxis.major_label_overrides = conv
        p.xaxis.ticker = SingleIntervalTicker(interval=1)
    return p

famData = dict()
e1FractSource = ColumnDataSource(dict(count=[], cols=[], rows=[], index=[]))
e1Fract = ModifiedgenericHeatMap(e1FractSource, 100)
rowSectTotSource = ColumnDataSource(data=dict(base=[]))
rowSectTotSource_error = ColumnDataSource(data=dict(base=[], lower=[], upper=[]))
rowSectPlot_tot = sectionizePlot(rowSectTotSource, rowSectTotSource_error, "eSum Row", rowBase)

def update(selected=None):
    global famData
    famData = getFAMData(file_source_dt1, True)
    global e1Stack
    e1Fract = (famData['e1Sub'] / famData['eSum']) * 100
    e1Stack = e1Fract.stack(dropna=False).reset_index()
    e1Stack.columns = ["rows", "cols", "count"]
    e1Stack['count'] = e1Stack['count'].apply(lambda x: round(x, 1))
    e1FractSource.data = dict(cols=e1Stack["cols"], count=(e1Stack["count"]),
                              rows=e1Stack["rows"], index=e1Stack.index.values, codon=wells, )
    rowData, colData = sectionize(famData['eSum'], rows, cols)
    rowData_lower, rowData_upper = getLowerUpper(rowData)
    rowBase = list(range(1, 17))
    rowSectTotSource_error.data = dict(base=rowBase, lower=rowData_lower, upper=rowData_upper, )
    rowSectTotSource.data = dict(base=rowData)
    rowSectPlot_tot.title.text = "plot changed in update"

layout = column(e1FractSource, rowSectPlot_tot)
update()
curdoc().add_root(layout)
curdoc().title = "Specs"
print("ok")

How to get frequencies of topics of NMF in sklearn

I am now using NMF to generate topics. My code is shown below, but I do not know how to get the frequency of each topic. Can anyone help me? Thank you!
def fit_tfidf(documents):
    tfidf = TfidfVectorizer(input='content', stop_words='english',
                            use_idf=True, ngram_range=NGRAM_RANGE, lowercase=True,
                            max_features=MAX_FEATURES, min_df=1)
    tfidf_matrix = tfidf.fit_transform(documents.values).toarray()
    tfidf_feature_names = np.array(tfidf.get_feature_names())
    tfidf_reverse_lookup = {word: idx for idx, word in enumerate(tfidf_feature_names)}
    return tfidf_matrix, tfidf_reverse_lookup, tfidf_feature_names

def vectorization(documents):
    if VECTORIZER == 'tfidf':
        vec_matrix, vec_reverse_lookup, vec_feature_names = fit_tfidf(documents)
    if VECTORIZER == 'bow':
        vec_matrix, vec_reverse_lookup, vec_feature_names = fit_bow(documents)
    return vec_matrix, vec_reverse_lookup, vec_feature_names

def nmf_model(vec_matrix, vec_reverse_lookup, vec_feature_names, NUM_TOPICS):
    topic_words = []
    nmf = NMF(n_components=NUM_TOPICS, random_state=3).fit(vec_matrix)
    for topic in nmf.components_:
        word_idx = np.argsort(topic)[::-1][0:N_TOPIC_WORDS]
        topic_words.append([vec_feature_names[i] for i in word_idx])
    return topic_words
If you mean the frequency of each topic inside each document, then:
H = nmf.fit_transform(vec_matrix)
H is a matrix of shape (n_documents, n_topics). Each row represents a document vector in the topic space; its entries are the weights of each topic, which you can read as the topic's importance in that document.
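If instead you want a single overall frequency per topic across the whole corpus, one way (a sketch reusing vec_matrix and NUM_TOPICS from the question) is to normalise the column sums of that document-topic matrix:
import numpy as np
from sklearn.decomposition import NMF

nmf = NMF(n_components=NUM_TOPICS, random_state=3)
W = nmf.fit_transform(vec_matrix)               # shape: (n_documents, n_topics)
topic_totals = W.sum(axis=0)                    # total weight of each topic
topic_freq = topic_totals / topic_totals.sum()  # relative "frequency", sums to 1
for idx, freq in enumerate(topic_freq):
    print("topic %d: %.3f" % (idx, freq))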

MullionType errors in Revit API/Dynamo script

I'm working on a Python script that takes a set of input lines and assigns a mullion to the corresponding gridline that they intersect. However, towards the end of the script I'm getting a strange error that I don't know how to correct: Python is telling me that it expected a MullionType and got a Family Type (see image). I'm using a modified version of Spring Nodes' Collector.WallTypes that collects Mullion Types instead, but the output of the node is a Family Type, which the script won't accept. Any idea how to get the Mullion Type to feed into the final Python node?
SpringNodes script:
#Copyright(c) 2016, Dimitar Venkov
# @5devene, dimitar.ven@gmail.com
import clr
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager
doc = DocumentManager.Instance.CurrentDBDocument
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import *
clr.AddReference("RevitNodes")
import Revit
clr.ImportExtensions(Revit.Elements)

def tolist(obj1):
    if hasattr(obj1, "__iter__"):
        return obj1
    else:
        return [obj1]

fn = tolist(IN[0])
fn = [str(n) for n in fn]
result, similar, names = [], [], []
fec = FilteredElementCollector(doc).OfClass(MullionType)
for i in fec:
    n1 = Element.Name.__get__(i)
    names.append(n1)
    if any(fn1 == n1 for fn1 in fn):
        result.append(i.ToDSType(True))
    elif any(fn1.lower() in n1.lower() for fn1 in fn):
        similar.append(i.ToDSType(True))
if len(result) > 0:
    OUT = result, similar
if len(result) == 0 and len(similar) > 0:
    OUT = "No exact match found. Check partial below:", similar
if len(result) == 0 and len(similar) == 0:
    OUT = "No match found! Check names below:", names
The SpringNodes script outputs a Family Type, even though the collector is for Mullion Types (see above image)
Here's my script:
import clr
# Import RevitAPI
clr.AddReference("RevitAPI")
import Autodesk
from Autodesk.Revit.DB import *
# Import DocumentManager and TransactionManager
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
# Import ToDSType(bool) extension method
clr.AddReference("RevitNodes")
import Revit
clr.ImportExtensions(Revit.GeometryConversion)
from System import Array
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import *
import math

doc = DocumentManager.Instance.CurrentDBDocument
app = DocumentManager.Instance.CurrentUIApplication.Application
walls = UnwrapElement(IN[0])
toggle = IN[1]
inputLine = IN[2]
mullionType = IN[3]
wallSrf = []
heights = []
finalPoints = []
directions = []
isPrimary = []
projectedCrvs = []
keySegments = []
keySegmentsGeom = []
gridSegments = []
gridSegmentsGeom = []
gridLines = []
gridLinesGeom = []
keyGridLines = []
keyGridLinesGeom = []
projectedGridlines = []
lineDirections = []
gridLineDirection = []
allTrueFalse = []

if toggle == True:
    TransactionManager.Instance.EnsureInTransaction(doc)
    for w, g in zip(walls, inputLine):
        pointCoords = []
        primary = []
        ## Get curtain wall element sketch line
        originLine = Revit.GeometryConversion.RevitToProtoCurve.ToProtoType(w.Location.Curve, True)
        originLineLength = w.Location.Curve.ApproximateLength
        ## Get curtain wall element height, loft to create surface
        for p in w.Parameters:
            if p.Definition.Name == 'Unconnected Height':
                height = p.AsDouble()
        topLine = originLine.Translate(0, 0, height)
        srfCurves = [originLine, topLine]
        wallSrf = NurbsSurface.ByLoft(srfCurves)
        ## Get centerpoint of curve, determine whether it extends across entire gridline
        projectedCrvCenterpoint = []
        for d in g:
            lineDirection = d.Direction.Normalized()
            lineDirections.append(lineDirection)
            curveProject = d.PullOntoSurface(wallSrf)
            if abs(lineDirection.Z) == 1:
                if curveProject.Length >= height - .5:
                    primary.append(False)
                else:
                    primary.append(True)
            else:
                if curveProject.Length >= originLineLength - .5:
                    primary.append(False)
                else:
                    primary.append(True)
            centerPoint = curveProject.PointAtParameter(0.5)
            pointList = []
            projectedCrvCenterpoint.append(centerPoint)
            ## Project centerpoint of curve onto wall surface
            for h in [centerPoint]:
                pointUnwrap = UnwrapElement(centerPoint)
                pointList.append(pointUnwrap.X)
                pointList.append(pointUnwrap.Y)
                pointList.append(pointUnwrap.Z)
            pointCoords.append(pointList)
        finalPoints.append(pointCoords)
        isPrimary.append(primary)
        projectedCrvs.append(projectedCrvCenterpoint)
    TransactionManager.Instance.TransactionTaskDone()
    TransactionManager.Instance.EnsureInTransaction(doc)
    ##Gather all segments of gridline geometry
    for wall in UnwrapElement(walls):
        gridSegments2 = []
        gridSegmentsGeom2 = []
        gridLines1 = []
        gridLinesGeom1 = []
        for id1 in wall.CurtainGrid.GetVGridLineIds():
            gridLinesGeom1.append(Revit.GeometryConversion.RevitToProtoCurve.ToProtoType(doc.GetElement(id1).FullCurve))
            gridLines1.append(doc.GetElement(id1))
            VgridSegments1 = []
            VgridSegmentsGeom1 = []
            for i in doc.GetElement(id1).AllSegmentCurves:
                VgridSegments1.append(i)
                VgridSegmentsGeom1.append(Revit.GeometryConversion.RevitToProtoCurve.ToProtoType(i, True))
            gridSegments2.append(VgridSegments1)
            gridSegmentsGeom2.append(VgridSegmentsGeom1)
        for id2 in wall.CurtainGrid.GetUGridLineIds():
            gridLinesGeom1.append(Revit.GeometryConversion.RevitToProtoCurve.ToProtoType(doc.GetElement(id2).FullCurve))
            gridLines1.append(doc.GetElement(id2))
            UgridSegments1 = []
            UgridSegmentsGeom1 = []
            for i in doc.GetElement(id2).AllSegmentCurves:
                UgridSegments1.append(i)
                UgridSegmentsGeom1.append(Revit.GeometryConversion.RevitToProtoCurve.ToProtoType(i, True))
            gridSegments2.append(UgridSegments1)
            gridSegmentsGeom2.append(UgridSegmentsGeom1)
        gridSegments.append(gridSegments2)
        gridSegmentsGeom.append(gridSegmentsGeom2)
        gridLines.append(gridLines1)
        gridLinesGeom.append(gridLinesGeom1)
    boolFilter = [[[[b.DoesIntersect(x) for x in d] for d in z] for b in a] for a, z in zip(projectedCrvs, gridSegmentsGeom)]
    boolFilter2 = [[[b.DoesIntersect(x) for x in z] for b in a] for a, z in zip(projectedCrvs, gridLinesGeom)]
    ##Select gridline segments that intersect with centerpoint of projected lines
    for x, y in zip(boolFilter, gridSegments):
        keySegments2 = []
        keySegmentsGeom2 = []
        for z in x:
            keySegments1 = []
            keySegmentsGeom1 = []
            for g, l in zip(z, y):
                for d, m in zip(g, l):
                    if d == True:
                        keySegments1.append(m)
                        keySegmentsGeom1.append(Revit.GeometryConversion.RevitToProtoCurve.ToProtoType(m, True))
            keySegments2.append(keySegments1)
            keySegmentsGeom2.append(keySegmentsGeom1)
        keySegments.append(keySegments2)
        keySegmentsGeom.append(keySegmentsGeom2)
    ##Order gridlines according to intersection with projected points
    for x, y in zip(boolFilter2, gridLines):
        keyGridLines1 = []
        keyGridLinesGeom1 = []
        for z in x:
            for g, l in zip(z, y):
                if g == True:
                    keyGridLines1.append(l)
                    keyGridLinesGeom1.append(Revit.GeometryConversion.RevitToProtoCurve.ToProtoType(l.FullCurve, True))
        keyGridLines.append(keyGridLines1)
        keyGridLinesGeom.append(keyGridLinesGeom1)
    ##Add mullions at intersected gridline segments
    TransactionManager.Instance.TransactionTaskDone()
    TransactionManager.Instance.EnsureInTransaction(doc)
    for x, y, z in zip(keyGridLines, keySegments, isPrimary):
        projectedGridlines1 = []
        for h, j, k in zip(x, y, z):
            for i in j:
                if i != None:
                    h.AddMullions(i, mullionType, k)
            projectedGridlines1.append(h)
        projectedGridlines.append(projectedGridlines1)
else:
    None
if toggle == True:
    OUT = projectedGridlines
else:
    None
TransactionManager.Instance.TransactionTaskDone()
Apologies for the messiness of the code; it's a modification of another node that I've been working on. Thanks for your help.
Bo,
Your problem is rooted in how Dynamo wraps elements for use with its own model. That last call, .ToDSType(True), is the gist of the issue. The MullionType class is a subclass of (it inherits properties from) the ElementType class in Revit. When the Dynamo team wrapped that object into a custom wrapper, they only wrote a top-level wrapper that treats all ElementTypes the same, hence the output is an ElementType/FamilyType rather than a specific MullionType.
First, I would suggest that you replace this line in your code:
mullionType = IN[3]
with:
mullionType = UnwrapElement(IN[3])
This is the built-in method for unwrapping elements so they can be used with calls to the Revit API.
If that still somehow remains an issue, you could try and retrieve the MullionType object again, this time directly in your script, before you use it. You can do so like this:
for x, y, z in zip(keyGridLines, keySegments, isPrimary):
    projectedGridlines1 = []
    for h, j, k in zip(x, y, z):
        for i in j:
            if i != None:
                h.AddMullions(i, doc.GetElement(mullionType.Id), k)
        projectedGridlines1.append(h)
    projectedGridlines.append(projectedGridlines1)
This should make sure that you get the MullionType element as it was before it got wrapped.
Again, try unwrapping it first, then the GetElement() call if the first approach doesn't work.
