Revit Python Wrapper - revit-api

I'm getting into Revit Python Wrapper / RevitPythonShell and am having trouble with a very simple task.
I have one wall in my project and I'm just trying to change the Top Offset from 0'-0" to 4'-0". I've been able to change the Comments in the properties, but that's about it.
Here's my code:
import rpw
from rpw import revit, db, ui, DB, UI

element = db.Element.from_int(352690)
with db.Transaction('Change height'):
    element.parameters['Top Offset'].value = 10
Here's my error:
[ERROR] Error in Transaction Context: has rolled back.
Exception : System.Exception: Parameter is Read Only: Top Offset
at Microsoft.Scripting.Interpreter.ThrowInstruction.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.Interpreter.HandleException(InterpretedFrame frame, Exception exception)
at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1)
at IronPython.Compiler.PythonScriptCode.RunWorker(CodeContext ctx)
at Microsoft.Scripting.Hosting.ScriptSource.Execute(ScriptScope scope)
at Microsoft.Scripting.Hosting.ScriptSource.ExecuteAndWrap(ScriptScope scope, ObjectHandle& exception)
Any and all help is appreciated. I've read the docs, but they don't seem to cover read-only items.
I'm in Revit 2019. RPS is using Python 2.7.7.

I think this is a "Revit Python Wrapper" (RPW) question more than a "RevitPythonShell" (RPS) one. I'm familiar with the way transactions are handled in RPS, but the documentation for RPW seems quite different.
This is what your code would look like in RevitPythonShell:
import clr
clr.AddReference('RevitAPI')
clr.AddReference('RevitAPIUI')
from Autodesk.Revit.DB import *
from Autodesk.Revit.UI import *

app = __revit__.Application
doc = __revit__.ActiveUIDocument.Document
ui = __revit__.ActiveUIDocument

element = doc.GetElement(ElementId(352690))

t = Transaction(doc, 'Change Height')
t.Start()
parameter = element.GetParameters('Top Offset')[0]
parameter.Set(10)  # Revit API length values are in decimal feet
t.Commit()
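On the "Parameter is Read Only" error itself: some wall parameters are only writable in certain states. If I remember correctly, a wall's Top Offset is read-only while its Top Constraint is set to Unconnected, in which case Unconnected Height is what drives the top of the wall. Here is a hedged sketch you can run in RPS to check before writing (Parameter.IsReadOnly is part of the Revit API; everything else follows the snippet above):
parameter = element.GetParameters('Top Offset')[0]
if parameter.IsReadOnly:
    # Likely cause: the wall's Top Constraint is Unconnected, so Top Offset
    # cannot be set directly; 'Unconnected Height' is writable instead.
    print('Top Offset is read-only on this wall')
else:
    t = Transaction(doc, 'Change Height')
    t.Start()
    parameter.Set(4.0)  # 4'-0" -- Revit API lengths are decimal feet
    t.Commit()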

Related

Need to set JMS_IBM_LAST_MSG_IN_GROUP property to true for IBM MQ testing using JMeter

I am testing IBM MQ using JMeter and am able to establish the connection with the queue to send requests over it. However, I need to set the "JMS_IBM_LAST_MSG_IN_GROUP" property to true for one of the messages but am unable to do so. I am using the piece of code below when sending the request and trying to set the property to true, but it remains at its default value, i.e. false, when I check in the backend. Any clue what I am missing here?
Note: The connection is established in another sampler and reused here. This code works fine for sending any request; it's just that the property is not getting set to true.
import java.time.Instant
import com.ibm.msg.client.jms.JmsConstants

// Reuse the session and destination stored by the connection sampler
def sess = System.getProperties().get("Session")
def destination = System.getProperties().get("Destination")
def producer = sess.createProducer(destination)
def rnd = new Random(System.currentTimeMillis())
def payload = String.format('${groupid}|${sequencenumber}|rest of the payload|')
def msg = sess.createTextMessage(payload)
println('Payload --> ' + payload)
msg.setBooleanProperty(JmsConstants.JMS_IBM_LAST_MSG_IN_GROUP, true)
def start = Instant.now()
producer.send(msg)
def stop = Instant.now()
producer.close()
SampleResult.setResponseData(msg.toString())
SampleResult.setDataType(org.apache.jmeter.samplers.SampleResult.TEXT)
SampleResult.setLatency(stop.toEpochMilli() - start.toEpochMilli())
Your code does not include anything to set the group ID or sequence number. I assume we have all the relevant code shown, in which case I think you are missing something along these lines:
msg.setStringProperty("JMSXGroupID", groupid);
msg.setIntProperty("JMSXGroupSeq", sequencenumber);
As per the JMS_IBM_LAST_MSG_IN_GROUP chapter of the IBM MQ documentation:
This property is ignored in the publish/subscribe domain and is not relevant when an application connects to a service integration bus.
In general it's not necessary to use this property; you can come up with your own custom one, i.e. for the producer:
msg.setBooleanProperty("X_CUSTOM_PROPERTY_LAST_MESSAGE",true)
and for the consumer:
msg.getBooleanProperty("X_CUSTOM_PROPERTY_LAST_MESSAGE")
More information: IBM MQ testing with JMeter - Learn How
Sharing as it might help someone else. I was able to set the property to true by making the changes below; the rest of the code is the same as in the original question:
import com.ibm.msg.client.wmq.WMQConstants

def gid = String.format('${groupid}')
def msg = sess.createTextMessage()
println('Payload --> ' + payload)
msg.setStringProperty(WMQConstants.JMSX_GROUPID, gid)
msg.setBooleanProperty(WMQConstants.JMS_IBM_LAST_MSG_IN_GROUP, true)
msg.text = payload

Google cloud function (python) does not deploy - Function failed on loading user code

I'm calling a simple Python function in Google Cloud but cannot get it to save. It shows this error:
"Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."
The logs don't seem to show much that would indicate an error in the code. I followed this guide: https://blog.thereportapi.com/automate-a-daily-etl-of-currency-rates-into-bigquery/
The only differences are the environment variables and the endpoint I'm using.
The code is below; it's just a GET request followed by a push of the data into a table.
import requests
import json
import time
import os
from google.cloud import bigquery

# Set default values for these variables if they are not found in environment variables
PROJECT_ID = os.environ.get("PROJECT_ID", "xxxxxxxxxxxxxx")
EXCHANGERATESAPI_KEY = os.environ.get("EXCHANGERATESAPI_KEY", "xxxxxxxxxxxxxxx")
REGIONAL_ENDPOINT = os.environ.get("REGIONAL_ENDPOINT", "europe-west1")
DATASET_ID = os.environ.get("DATASET_ID", "currency_rates")
TABLE_NAME = os.environ.get("TABLE_NAME", "currency_rates")
BASE_CURRENCY = os.environ.get("BASE_CURRENCY", "SEK")
SYMBOLS = os.environ.get("SYMBOLS", "NOK,EUR,USD,GBP")

def hello_world(request):
    latest_response = get_latest_currency_rates()
    write_to_bq(latest_response)
    return "Success"

def get_latest_currency_rates():
    PARAMS = {'access_key': EXCHANGERATESAPI_KEY, 'symbols': SYMBOLS, 'base': BASE_CURRENCY}
    response = requests.get("https://api.exchangeratesapi.io/v1/latest", params=PARAMS)
    print(response.json())
    return response.json()

def write_to_bq(response):
    # Instantiates a client
    bigquery_client = bigquery.Client(project=PROJECT_ID)
    # Prepares a reference to the dataset and table
    dataset_ref = bigquery_client.dataset(DATASET_ID)
    table_ref = dataset_ref.table(TABLE_NAME)
    table = bigquery_client.get_table(table_ref)
    # Get the current timestamp so we know how fresh the data is
    timestamp = time.time()
    jsondump = json.dumps(response)  # Returns a string, ensuring the response is stored as a string, not JSON
    rows_to_insert = [{"timestamp": timestamp, "data": jsondump}]
    errors = bigquery_client.insert_rows(table, rows_to_insert)  # API request
    print(errors)
    assert errors == []
I tried just the part that does the GET request in an offline editor and can confirm the response works fine. I suspect it might have something to do with permissions or the way the script tries to access the database.
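One thing worth checking, offered as a guess rather than a confirmed fix: "Function failed on loading user code" often means an import failed at load time, for example when a third-party dependency is missing from requirements.txt. A minimal requirements.txt for the imports above would look like this (package names are the standard PyPI ones; versions left unpinned on purpose):
# requirements.txt, deployed alongside main.py
requests
google-cloud-bigquery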

400 Caller's project doesn't match parent project

I have this block of code that translates text from one language to another using the Cloud Translation API. The problem is that this code always throws the error "Caller's project doesn't match parent project". What could be the problem?
translation_separator = "translated_text: "
language_separator = "detected_language_code: "
translate_client = translate.TranslationServiceClient()
# parent = translate_client.location_path(
#     self.translate_project_id, self.translate_location
# )
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = (
    os.getcwd() + "/translator_credentials.json"
)
# Text can also be a sequence of strings, in which case this method
# will return a sequence of results for each text.
try:
    result = str(
        translate_client.translate_text(
            request={
                "contents": [text],
                "target_language_code": self.target_language_code,
                "parent": f'projects/{self.translate_project_id}/'
                          f'locations/{self.translate_location}',
                "model": self.translate_model,
            }
        )
    )
    print(result)
except Exception as e:
    print("error here>>>>>", e)
Your issue seems to be related to the authentication method that you are using in your application; please follow the guide for authentication methods with the Translation API. If you are trying to pass the credentials in code, you can explicitly point to your service account file with:
def explicit():
    from google.cloud import storage

    # Explicitly use service account credentials by specifying the private
    # key file.
    storage_client = storage.Client.from_service_account_json(
        'service_account.json')
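The snippet above uses the Storage client as the example; for the Translation client, the equivalent is to build the credentials object yourself and pass it to the client. A sketch, assuming google-auth is installed and reusing the translator_credentials.json key file from the question:
from google.cloud import translate
from google.oauth2 import service_account

# Load the key file explicitly; note that in the question the client is
# created *before* GOOGLE_APPLICATION_CREDENTIALS is set, so that variable
# never affects it. Credentials must be in place before client construction.
credentials = service_account.Credentials.from_service_account_file(
    'translator_credentials.json')
translate_client = translate.TranslationServiceClient(credentials=credentials)
Also make sure the project ID in "parent" matches the project the service account belongs to; the "Caller's project doesn't match parent project" message typically points at exactly that mismatch.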
Also, there is a codelab for getting started with the Translation API in Python; it is a great step-by-step guide for running the Translation API with Python.
If the issue persists, you can try creating an issue on Google's Public Issue Tracker for Google support.

Is there a way to convert a Python script into one class or a package?

I am migrating data from IBM to Snowflake in 3 stages: extract, transform, and load.
Below is the Python code that connects the source (IBM) and the destination (Snowflake) and does the ETL.
Is there any way I can create a class/package out of the entire code below?
import snowflake.connector

tableName = 'F58001'
ctx = snowflake.connector.connect(
    user='*',
    password='*',
    account='*.azure'
)
cs = ctx.cursor()
ctx.cursor().execute("USE DATABASE STORE_PROFILE")
ctx.cursor().execute("USE SCHEMA LANDING")
try:
    ctx.cursor().execute("PUT file:///temp/data/{tableName}/* @%{tableName}".format(tableName=tableName))
except Exception:
    pass
ctx.cursor().execute("truncate table {tableName}".format(tableName=tableName))
ctx.cursor().execute("COPY INTO {tableName} ON_ERROR = 'CONTINUE' ".format(
    tableName=tableName, FIELD_OPTIONALLY_ENCLOSED_BY='""', sometimes=',',
    ERROR_ON_COLUMN_COUNT_MISMATCH='TRUE'))
last_query_id = ctx.cursor().execute("select last_query_id()")
for res in last_query_id:
    query_id = res[0]
ctx.cursor().execute("create or replace table save_copy_errors as select * from "
                     "table(validate(" + tableName + ", job_id => '" + query_id + "'))")
ax = ctx.cursor().execute("select * from save_copy_errors")
for errors in ax:
    error = errors
    print(error)
ctx.close()
Please look at the repository below. It probably has the answer to your question. I am currently working on moving it to PyPI so that it can be installed with pip:
https://github.com/Infosys/Snowflake-Python-Development-Framework
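In the meantime, here is a minimal sketch of one way to fold the script into a class (the name SnowflakeLoader and the method split are illustrative, not taken from the framework above):
import snowflake.connector

class SnowflakeLoader:
    """Wraps the connect / stage / copy steps of the script as one reusable class."""

    def __init__(self, user, password, account, database, schema):
        self.ctx = snowflake.connector.connect(
            user=user, password=password, account=account)
        self.ctx.cursor().execute("USE DATABASE " + database)
        self.ctx.cursor().execute("USE SCHEMA " + schema)

    def load_table(self, table_name, local_dir):
        # Stage the files, truncate the target, then COPY INTO it
        cur = self.ctx.cursor()
        cur.execute("PUT file://{d}/{t}/* @%{t}".format(d=local_dir, t=table_name))
        cur.execute("truncate table {t}".format(t=table_name))
        cur.execute("COPY INTO {t} ON_ERROR = 'CONTINUE'".format(t=table_name))

    def close(self):
        self.ctx.close()

# Usage (credentials elided as in the question):
# loader = SnowflakeLoader('*', '*', '*.azure', 'STORE_PROFILE', 'LANDING')
# loader.load_table('F58001', '/temp/data')
# loader.close()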

Elasticsearch via Python gives wrong count

I’m new to Python and I need to connect to “Kibana” via Python. We’re using Kibana 7.4.1. The requirement is to get just the count (hits).
Due to some restrictions I need to use Python 3.6 only. I’ve added the “elasticsearch” and “elasticsearch-dsl” libraries.
I’m able to connect to Kibana via the client, but I’m getting the wrong hits count.
Code:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import MultiSearch, Search
from elasticsearch_dsl.query import QueryString, Range, SimpleQueryString

es = Elasticsearch(['host2', 'host2'], http_auth=('usr', 'pass'), port=9200)
s = Search(using=es, index='c*')
s.filter(SimpleQueryString(query="tags:prod AND severity:INFO AND service: finder AND msg:* is processed"))
s.filter(Range(**{'@timestamp': {'gte': 'now-5m', 'lt': 'now'}}))
response = s.execute()
print("Got %d Hits:" % response['hits']['total']['value'])  # Always coming as 1000 so this is wrong
Can I get some help with this, please?
First of all, a little clarification: you are connecting to Elasticsearch, not Kibana (Kibana is a client, like the program you are writing).
You are always receiving 10000 as the result because your index has more than 10000 hits. This is a documented feature: since the count computation is expensive in the general case, it is performed only when needed. In order to obtain the right number of results you have two possibilities:
set the query parameter track_total_hits to true, or
use the count API.
track_total_hits
You can add this extra parameter to the search object as reported here as follows:
s = Search(using=es, index='c*')
s = s.extra(track_total_hits=True)
<the rest of your code>
Count API approach
Instead of invoking the execute() function, you can simply use the count() function:
s = Search(using=es, index='c*')
s = s.filter(SimpleQueryString(query="tags:prod AND severity:INFO AND service: finder AND msg:* is processed"))
s = s.filter(Range(**{'@timestamp': {'gte': 'now-5m', 'lt': 'now'}}))
response = s.count()
print("Got %d Hits:" % response)
Kind regards
