I'm trying to create a new family parameter by opening a family document from within a project document and using the FamilyManager method to edit the family. About ten people have asked for this on the Dynamo forums, so I figured I'd give it a shot. Here's my Python script:
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import *
clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import *
#The inputs to this node will be stored as a list in the IN variables.
familyInput = UnwrapElement(IN[0])
familySymbol = familyInput.Symbol.Family
doc = familySymbol.Document
par_name = IN[1]
par_type = ParameterType.Text
par_grp = BuiltInParameterGroup.PG_DATA
TransactionManager.Instance.EnsureInTransaction(doc)
familyDoc = doc.EditFamily(familySymbol)
OUT = familyDoc.FamilyManager.AddParameter(par_name,par_grp,par_type,False)
TransactionManager.Instance.TransactionTaskDone()
When I run the script, I get this error:
Warning: IronPythonEvaluator.EvaluateIronPythonScript operation failed.
Traceback (most recent call last):
File "<string>", line 26, in <module>
Exception: The document is currently modifiable! Close the transaction before calling EditFamily.
I'm assuming this error occurs because I'm opening a family document that already exists through the script and never sending the information back to the project document, or something along those lines. Any tips on how to get around this?
Building on our discussion from the forum:
import clr
clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
doc = DocumentManager.Instance.CurrentDBDocument
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import *
par_name = IN[0]
exec("par_type = ParameterType.%s" % IN[1])
exec("par_grp = BuiltInParameterGroup.%s" % IN[2])
inst_or_typ = IN[3]
families = UnwrapElement(IN[4])
# class for overwriting loaded families in the project
class FamOpt1(IFamilyLoadOptions):
    def __init__(self): pass
    def OnFamilyFound(self, familyInUse, overwriteParameterValues): return True
    def OnSharedFamilyFound(self, familyInUse, source, overwriteParameterValues): return True

trans1 = TransactionManager.Instance
trans1.ForceCloseTransaction()  # just to make sure everything is closed down
# Dynamo's transaction handling is pretty poor for
# multiple documents, so we'll need to force close
# every single transaction we open

result = []
for f1 in families:
    famdoc = doc.EditFamily(f1)
    try:  # this might fail if the parameter exists or for some other reason
        trans1.EnsureInTransaction(famdoc)
        famdoc.FamilyManager.AddParameter(par_name, par_grp, par_type, inst_or_typ)
        trans1.ForceCloseTransaction()
        famdoc.LoadFamily(doc, FamOpt1())
        result.append(True)
    except:  # you might want to import traceback for a more detailed error report
        result.append(False)
    trans1.ForceCloseTransaction()
    famdoc.Close(False)

OUT = result
[Image: the Dynamo graph]
The error message is already telling you exactly what the problem is: "The document is currently modifiable! Close the transaction before calling EditFamily".
I assume that TransactionManager.Instance.EnsureInTransaction opens a transaction on the given document. You cannot call EditFamily with an open transaction.
That is clearly documented in the help file:
http://thebuildingcoder.typepad.com/blog/2012/05/edit-family-requires-no-transaction.html
Close the transaction before calling EditFamily, or, in this case, don't open it at all to start with.
Oh, and then, of course, you wish to modify the family document. That will indeed require a transaction, but on the family document 'familyDoc', NOT on the project document 'doc'.
I don't know whether this will be the final solution, but it might help:
familyDoc = doc.EditFamily(familySymbol)
TransactionManager.Instance.EnsureInTransaction(familyDoc)
OUT = familyDoc.FamilyManager.AddParameter(par_name,par_grp,par_type,False)
TransactionManager.Instance.TransactionTaskDone()
I'm debugging existing code. I'm trying to figure out the intention behind the obviously wrong access to .dicts on a peewee Model in the warning statement in MyDbBackend.store, and how I could correct it.
I guess the warning message is supposed to add more detailed output about the model that could not be saved. However, the .dicts attribute only exists on the orm.BaseQuery class.
The output message is currently not very helpful. I want to provide an improved warning message when i.save() fails. By "improved" I mean providing some meta information about the record that failed to be saved.
So, how can I obtain the BaseQuery from the model, and what would .dicts() output then? Would that information be useful in the context of the warning message?
import peewee as orm

database = orm.Proxy()

class ModelBase(orm.Model):
    class Meta:
        database = database

class MyModel(ModelBase):
    dtfield = orm.DateTimeField(null=True)
    intfield = orm.IntegerField(null=True)
    floatfield = orm.FloatField(null=True)

class MyDbBackend:
    def __init__(self, database):
        self.db = database
        self.records = []  # Holds objects derived from ModelBase

    [...]

    def store(self):
        with self.db.atomic():
            for i in self.records:
                try:
                    i.save()
                except Exception as e:
                    logger.warning("could not save record: {}".format(i.dicts()))
                    raise e
        self.clear()
Running this produces:
logger.warning("could not save record: {}".format(i.dicts()))
AttributeError: 'MyModel' object has no attribute 'dicts'
I guess that the original code was meant to make use of playhouse.shortcuts.model_to_dict; that is the only explanation I have for why the original code calls i.dicts(). Perhaps it was a misunderstanding.
import peewee as orm
from playhouse.shortcuts import model_to_dict
[...]
logger.warning(f"Model dict: {model_to_dict(i, recurse=True, max_depth=2)}")
[...]
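Folding that into the store() method from the question, the except block could look roughly like this (just a sketch, reusing the logger and the MyDbBackend class from the question):

from playhouse.shortcuts import model_to_dict

def store(self):
    with self.db.atomic():
        for i in self.records:
            try:
                i.save()
            except Exception:
                # model_to_dict() returns the field values of the record that
                # failed to save, which is far more informative than the
                # missing .dicts() attribute
                logger.warning("could not save record: {}".format(
                    model_to_dict(i, recurse=True, max_depth=2)))
                raise
    self.clear()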
I'm working on an Insight automation Groovy script, but I got stuck at one point.
I have an Insight object called "Agreement".
This object has Inbound References called "Services".
Each Agreement can have any number of Services, and I need to get a list of all Services for a given Agreement. I found the method findObjectInboundReferencedBeans(int id) in the docs, but apparently I'm missing something, because when I run the script I get an error in the log:
AutomationRuleGroovyScriptAction, Unexpected error: No signature of method: com.riadalabs.jira.plugins.insight.channel.external.api.facade.impl.ObjectFacadeImpl.findObjectInboundReferencedBeans() is applicable for argument types: (java.lang.Integer) values: [15748]
Here is my script:
import com.atlassian.jira.component.ComponentAccessor;
import com.riadalabs.jira.plugins.insight.services.model.ObjectAttributeBean;
import com.riadalabs.jira.plugins.insight.services.model.ObjectBean;
import com.riadalabs.jira.plugins.insight.services.model.MutableObjectAttributeBean;
import com.riadalabs.jira.plugins.insight.services.model.MutableObjectBean;
Class objectFacadeClass = ComponentAccessor.getPluginAccessor().getClassLoader().findClass("com.riadalabs.jira.plugins.insight.channel.external.api.facade.ObjectFacade")
def objectFacade = ComponentAccessor.getOSGiComponentInstanceOfType(objectFacadeClass)
// def agreementStatusId = 2641
// def serviceStatusId = 2180
// def activeStatus = 1
// def stoppedStatus = 6
def attributeRef = "Status smlouvy"
def objectKey = object.getObjectKey();
def insightObject = objectFacade.loadObjectBean(objectKey)
int objectId = insightObject.getId()
// Here is the line I need help with
def inRef = objectFacade.findObjectInboundReferencedBeans(objectId)
def objectAttribute = objectFacade.loadObjectAttributeBean(objectId, attributeRef)
def objectAttributeValue = objectAttribute.getObjectAttributeValueBeans()[0].getValue()
log.warn(objectKey.toString())
log.warn(insightObject.toString())
log.warn(inRef)
I also use this script to get an attribute value from the Agreement object, which works fine.
I guess the problem is that I'm calling the method on the wrong object, but when I try to call it directly on "insightObject", I get the same error.
Thank you!
As answered on the Atlassian Community Page.
The method was removed with the release of Insight 5.1; see the upgrade notes here.
However, you can just use this to solve your issue:
def inRef = iqlFacade.findObjects(/object HAVING outboundReferences(Key = ${objectKey})/)
Tooling:
Raspberry Pi 3B
Raspbian
BME280
Python3
Flask
Sqlite3
Error code:
Traceback (most recent call last):
File "BME280_DataCollector.py", line 65, in <module>
File "BME280_DataCollector.py", line 45, in logData
sqlite3.OperationalError: unable to open database file
I'm working on Raspbian and want to store my sensor data in an sqlite3 database.
Somehow the following error occurs:
"sqlite3.OperationalError: unable to open database file".
At first, I thought I was accessing the database file too quickly, so I changed the measurement interval to one minute, but the error is still reproducible.
I looked into /tmp with df /tmp, but this file system is only 12% used and not overloaded.
I also tried giving the database file full read and write permissions via chmod and put the full path into the code, but that made no difference.
Furthermore, I tried a few try/except approaches, which also weren't fruitful.
I also wanted to know whether this failure occurs after a certain number of interactions with the database, and found that it always stops at the 1020th interaction.
I also tried to restart the Python script with a shell script, but that didn't work out due to my lack of experience and knowledge.
Code:
from flask import Flask,render_template,url_for,request,redirect, make_response, send_file
import random
import json
from time import sleep
from random import random
from flask import Flask, render_template, make_response
import datetime
import sqlite3
import sys
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from matplotlib.figure import Figure
import io
import os
import smbus2
import bme280
## FUNCTIONS ##

# get data
def getBMEdata():
    port = 1
    adress = 0x77
    bus = smbus2.SMBus(port)
    calibration_params = bme280.load_calibration_params(bus, adress)
    bme_data = bme280.sample(bus, adress, calibration_params)
    temp = '%.2f' % bme_data.temperature
    hum = '%.2f' % bme_data.humidity
    press = '%.2f' % bme_data.pressure
    now = datetime.datetime.now()  # get time
    timeString = now.strftime('%d-%m-%Y %H:%M:%S')  # write time to string
    return temp, hum, press, timeString

# function to insert data on a table
def logData(temp, hum, press, timeString):
    conn = sqlite3.connect(dbname)
    curs = conn.cursor()
    curs.execute("INSERT INTO BME280_data values((?), (?), (?), (?))", (timeString, temp, hum, press))
    conn.commit()
    conn.close()

# display data base
def displayData():
    conn = sqlite3.connect(dbname)
    curs = conn.cursor()
    print("\nEntire database contents:\n")
    for row in curs.execute("SELECT * FROM BME280_data"):
        print(row)
    conn.close()

## MAIN
if __name__ == '__main__':
    count = 0
    dbname = '/home/pi/projects/SmartPlanting/Sensors_Database/sensorsData.db'
    sampleFreq = 60  # data collect every minute
    while True:
        temp, hum, press, timeString = getBMEdata()  # get data
        logData(temp, hum, press, timeString)  # save data
        sleep(sampleFreq)  # wait
        displayData()  # show in terminal
        #count = count+1
        #print(count)
Maybe someone has already solved this problem or can suggest an alternative to sqlite3 that works with Flask.
Suggestion: add a more complete exception handling routine, because your stack trace could be more verbose.
Judging from your trace, the offending line #45 could be conn.commit() (or the line just above it). Python is already helping you pinpoint the error: something is going wrong in the logData function.
Could it be that you are feeding incorrect data to your table BME280_data? To debug your application, I would strongly recommend logging the data you are trying to insert (use the logging module to output to a file and/or the console). I don't know the structure of your table, but some of your fields could have a definition (data type) that is not compatible with the data you are trying to insert. The fact that you can predictably reproduce the problem is quite telling, and my hunch is that the data could be the cause.
To sum up: build good habits now and add at least basic exception handling.
A quality application should have exception handling and should log errors so they can be reviewed and remediated by a human. This is even more important for unattended applications, because you are not in front of the console and you may never get a chance to see the problems that occur.
Here is one tutorial that may help: https://code.tutsplus.com/tutorials/error-handling-logging-in-python--cms-27932
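As an illustration only, here is a minimal sketch of what that could look like for the logData function from the question (the logger setup and log file path are assumptions added for illustration; dbname is the same global as in your script):

import logging
import sqlite3

# write errors to a log file so they survive unattended runs
# (the path below is only an example)
logging.basicConfig(
    filename='/home/pi/projects/SmartPlanting/sensor_errors.log',
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s')
logger = logging.getLogger(__name__)

def logData(temp, hum, press, timeString):
    conn = None
    try:
        conn = sqlite3.connect(dbname)
        curs = conn.cursor()
        curs.execute("INSERT INTO BME280_data values((?), (?), (?), (?))",
                     (timeString, temp, hum, press))
        conn.commit()
    except sqlite3.Error:
        # log the exact values that were being inserted when the failure
        # happened, plus the full traceback
        logger.exception("insert failed for row: %s, %s, %s, %s",
                         timeString, temp, hum, press)
        raise
    finally:
        if conn is not None:
            conn.close()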
After hours of debugging, and because my organization does not have a lot of Python expertise, I am turning to this community for help.
I am trying to follow this tutorial with the goal of committing some data to the database. Although no errors are reported, no rows are being saved either. What am I doing wrong?
When trying to commit using the db2Session, I get:
Transaction must be committed using the transaction manager.
But nowhere in the tutorial do I see the transaction manager being used. I thought this manager was bound using zope.sqlalchemy? Yet nothing happens otherwise. Help would be really appreciated!
I have the following setup in my main function in a Pyramid App:
from sqlalchemy import engine_from_config
from .models import db1Session, db2Session

def main(global_config, **settings):
    """ This function returns a Pyramid WSGI application.
    """
    db1_engine = engine_from_config(settings, 'db1.')
    db2_engine = engine_from_config(settings, 'db2.')
    db1Session.configure(bind=db1_engine)
    db2Session.configure(bind=db2_engine)
In ./models/__init__.py, I have:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

db1Session = scoped_session(sessionmaker(
    extension=ZopeTransactionExtension()))
db2Session = scoped_session(sessionmaker(
    extension=ZopeTransactionExtension()))

Base = declarative_base()
In ./model/db2.py I have:
class PlateWellResult(Base):
    __tablename__ = 'SomeTable'
    __table_args__ = {"schema": 'some_schema'}
    id = Column("ID", Integer, primary_key=True)
    plate_id = Column("PlateID", Integer)
    hit_group_id = Column("HitID", Integer, ForeignKey(
        'some_schema.HitGroupID.ID'))
    well_loc = Column("WellLocation", String)
The relevant bits of my saving function, in ./lib/db2_api.py, look like this:
def save_selected_rows(input_data, selected_rows, hit_group_id):
    """ Wrapper method for saving selected rows """
    # Assume I have all the right data below.
    new_hit_row = PlateWellResult(
        plate_id=master_plate_id,
        hit_group_id=hit_group_id,
        well_loc=selected_df_row.masterWellLocation)
    db1Session.add(new_hit_row)
    # When I try the row below:
    # db2Session.commit()
    # I get: Transaction must be committed using the transaction manager.
    # If I leave that line commented out, nothing gets committed.
    return 'Save successful.'
That function is called from my view:
@view_config(route_name='some_routename', renderer='json',
             permission='create_hit_group')
def save_to_hitgroup(self):
    """ Backend to AJAX call to save selected rows to a hit_group """
    try:
        # Assume that all values were checked and all the right
        # parameters are passed
        status = save_selected_rows(some_result,
                                    selected_rows_list,
                                    hitgroup_id)
        json_resp = json.dumps({'errors': [],
                                'status': status})
        return json_resp
    except Exception as e:
        json_resp = json.dumps({'errors': ['Error during saving. {0}'.format(e)],
                                'status': []})
        return json_resp
The comments above are good. I just wanted to summarize here.
The transaction manager is begun/committed/aborted by pyramid_tm. If you aren't using that, then that's likely the issue.
You are also swallowing possible database exceptions that need to be conveyed to the transaction manager. You can do this via transaction.abort() in the exception handler.
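As a rough sketch only (built on the view from the question, and assuming pyramid_tm is included in your configuration), the view could let the transaction manager do the committing and only signal failures through the transaction package:

import transaction

@view_config(route_name='some_routename', renderer='json',
             permission='create_hit_group')
def save_to_hitgroup(self):
    """ Backend to AJAX call to save selected rows to a hit_group """
    try:
        status = save_selected_rows(some_result,
                                    selected_rows_list,
                                    hitgroup_id)
        # no explicit db2Session.commit() here: pyramid_tm commits the
        # request's transaction when the view returns without error
        return json.dumps({'errors': [], 'status': status})
    except Exception as e:
        # tell the transaction manager to roll back instead of silently
        # swallowing the database error
        transaction.abort()
        return json.dumps({'errors': ['Error during saving. {0}'.format(e)],
                           'status': []})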
I am currently running a script that is supposed to create an SQLAlchemy database and populate it with the data below; this DB is linked to a Flask-based tasklist application. I am using Flask-SQLAlchemy 2.1.
This is the db_create.py script.
# project/db_create.py
from views import db
from models import Task
from datetime import date
# create the database and the db table
db.create_all()
# insert data
db.session.add(Task("Finish this", date(2016, 9, 22), 10, 1))
db.session.add(Task("Finish Python", date(2016, 10, 3), 10, 1))
# commit the changes
db.session.commit()
Specifically, I receive this error whenever I try to create the database:
/Users/Paul/Desktop/RealPython/flasktaskr/ENV/lib/python3.6/site-packages/flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.
  warnings.warn('SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.')
Traceback (most recent call last):
File "db_create.py", line 8, in <module>
db.session.add(Task("Finish this", date(2016, 9, 22), 10, 1))
TypeError: __init__() takes 1 positional argument but 5 were given
I have tracked down an __init__() function in one of the files within the app; maybe this might be of some help to you:
def __init__(self, name, due_date, priority, status):
    self.name = name
    self.due_date = due_date
    self.priority = priority
    self.status = status
When I run the db_create.py file, the database IS created; however, it fails to populate the DB with the data in that file.
Why am I receiving the error, and why is the DB failing to populate?
You need to add an __init__() method to your SQLAlchemy class definition for Task.
See the docs for an example.
The first positional argument is always self (read about object-oriented programming to understand more about that), but essentially this error is saying that you are passing positional arguments and your class definition doesn't know what to do with them.
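For reference, a Task model with such a constructor could look roughly like this (a sketch: the table name and column definitions below are assumptions inferred from the __init__ signature in the question, not the tutorial's exact code; the db object is the one imported in your db_create.py):

# project/models.py (sketch)
from views import db

class Task(db.Model):
    __tablename__ = "tasks"

    task_id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)
    due_date = db.Column(db.Date, nullable=False)
    priority = db.Column(db.Integer, nullable=False)
    status = db.Column(db.Integer)

    # this constructor is what makes Task("Finish this", date(2016, 9, 22), 10, 1) work
    def __init__(self, name, due_date, priority, status):
        self.name = name
        self.due_date = due_date
        self.priority = priority
        self.status = status

Alternatively, you could keep the default constructor that the declarative base provides and pass keyword arguments instead, e.g. Task(name="Finish this", due_date=date(2016, 9, 22), priority=10, status=1).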