How can we ensure that the thermal transmittance takes into account the properties of the thermal asset assigned to the material? - revit-api

Steps to reproduce the issue
(Using Revit 2021.1.3)
Create materials through the Revit API and assign them a thermal asset
Assign the material to a wall layer (a minimal sketch of these steps follows the list)
See that thermal conductivity is filled in but thermal resistance is still 0
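For illustration, a minimal sketch of these steps, assuming an open transaction on doc, an existing WallType held in wall_type, and placeholder values for layer_index, layer_name and thermal_conductivity (these names are illustrative and not taken from the pyRevitMEP code):
from Autodesk.Revit.DB import Material, ThermalAsset, ThermalMaterialType, PropertySetElement

# Create the material and attach a thermal asset via the ThermalAssetId property
material_id = Material.Create(doc, layer_name)
thermal_asset = ThermalAsset(layer_name, ThermalMaterialType.Solid)
thermal_asset.ThermalConductivity = thermal_conductivity  # value assumed to already be in internal units
property_set = PropertySetElement.Create(doc, thermal_asset)
doc.GetElement(material_id).ThermalAssetId = property_set.Id  # the approach that reproduces the issue

# Assign the material to one layer of the wall type's compound structure
structure = wall_type.GetCompoundStructure()
structure.SetMaterialId(layer_index, material_id)
wall_type.SetCompoundStructure(structure)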
Explored solution
Manual workaround
Manually modify any material parameter, for example the Comments parameter
See that this time the resistance is no longer 0, which means the thermal asset is now taken into account
Things which did not work
Modify the Comments parameter through the Revit API in a separate transaction (sketched below)
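For reference, the separate-transaction attempt looked roughly like the sketch below, assuming the Material element is held in revit_material; looking the parameter up by the name "Comments" is an assumption about what was modified. It did not make Revit recompute the resistance:
from Autodesk.Revit.DB import Transaction

t = Transaction(doc, "Touch material comments")
t.Start()
comments_param = revit_material.LookupParameter("Comments")  # hypothetical way to reach the Comments parameter
if comments_param:
    comments_param.Set("force thermal refresh")  # any value change was expected to trigger a recompute
t.Commit()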
Current work-in-progress source code
The current work-in-progress source code can be found in the pyRevitMEP repo

I found an example in the Revit API documentation. Apparently, setting the thermal property set through the ThermalAssetId property is not the way to go; we need to use the SetMaterialAspectByPropertySet method instead.
# Assumes `doc`, `layer_name` and `thermal_conductivity` are defined and an edit Transaction is open
from Autodesk.Revit.DB import (Material, ThermalAsset, ThermalMaterialType,
                               UnitUtils, DisplayUnitType, PropertySetElement,
                               MaterialAspect)

revit_material = doc.GetElement(Material.Create(doc, layer_name))
thermal_asset = ThermalAsset(layer_name, ThermalMaterialType.Solid)
thermal_asset.ThermalConductivity = UnitUtils.ConvertToInternalUnits(
    thermal_conductivity,
    DisplayUnitType.DUT_WATTS_PER_METER_KELVIN,
)
thermal_property_set = PropertySetElement.Create(doc, thermal_asset)
# Bind the property set as the material's thermal aspect instead of setting ThermalAssetId directly
revit_material.SetMaterialAspectByPropertySet(MaterialAspect.Thermal, thermal_property_set.Id)

Related

Translation Error: ThermalZone; Medium.enthalpyOfCondensingGas

I am trying to run a simulation for the first time in OpenModelica with the Buildings library. I have a Thermal Zone and I am dragging and dropping components onto the workspace. I connected an EP Spawn widget to the Zone Surface, then connected the Zone Surface to the ThermalZone. I encountered the following error when checking the model:
[Buildings.ThermalZones.EnergyPlus_9_6_0.ThermalZone: 81:3-82:48]: Function Medium.enthalpyOfCondensingGas not found in scope ThermalZone.
picture of thermal zone components
When I click the link, the code clearly shows me a line where the function does exist.
Picture of Function Code
So, what does this mean in two sentences or less?
I am trying to run a check in OpenModelica using the LBL Buildings library and I encounter an error for the ThermalZone. The code seems fine. I want it to run.

Weights&Biases Sweep - Why might runs be overwriting each other?

I am new to ML and W&B, and I am trying to use W&B to do a hyperparameter sweep. I created a few sweeps and when I run them I get a bunch of new runs in my project (as I would expect):
Image: New runs being created
However, all of the new runs say "no metrics logged yet" (Image); instead, all of their metrics are going into one run (the one with the green dot in the image above). This makes the sweep unusable, of course, since the metrics, images, and graph data for many different runs are all being crammed into a single run.
Does anyone have experience with W&B? I feel like this is an issue that should be relatively straightforward to solve - probably just something in the W&B config that I need to change.
Any help would be appreciated. I didn't give too many details because I am hoping this is relatively straightforward, but if there are any specific questions I'd be happy to provide more info. The basics:
Using Google Colab for training
Project is a PyTorch-YOLOv3 object detection model that is based on this: https://github.com/ultralytics/yolov3
Thanks! 😊
Update: I think I figured it out.
I was using the train.py code from the repository I linked in the question, and part of that code specifies the id of the run (used for resuming).
I removed the part where it specifies the ID, and it is now working :)
Old code:
wandb_run = wandb.init(config=opt, resume="allow",
                       project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem,
                       name=save_dir.stem,
                       id=ckpt.get('wandb_id') if 'ckpt' in locals() else None)
New code:
wandb_run = wandb.init(config=opt, resume="allow",
                       project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem,
                       name=save_dir.stem)
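For context, here is a minimal sketch of how a sweep is normally driven so that every trial gets its own run; the sweep config and train function are illustrative and not taken from the YOLOv3 repo. The key point is that wandb.init() inside the agent's function is called without a fixed id, so each trial starts a fresh run:
import wandb

# Illustrative sweep configuration
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {"lr": {"min": 0.0001, "max": 0.1}},
}

def train():
    # No explicit id/resume arguments: the agent gets a fresh run for every trial
    run = wandb.init(project="YOLOv3")
    lr = wandb.config.lr  # hyperparameter value chosen by the sweep controller
    for epoch in range(3):
        loss = lr / (epoch + 1)  # placeholder for the real training loss
        wandb.log({"loss": loss, "epoch": epoch})
    run.finish()

sweep_id = wandb.sweep(sweep_config, project="YOLOv3")
wandb.agent(sweep_id, function=train, count=5)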

Is there a way to extract all bins assigned using the test_ids gem?

We use the test_ids gem to handle our binning assignment and it works great. We use that information to create some 3rd-party files instead of using the native ATE binning. The issue arises when we pass multiple flow files to the 'program' command.
origen p func_cpu_flow.rb func_gpu_flow.rb
Between flow generations, the test interface gets reset and the binning information it knows about is lost. Is there an API in the test_ids gem that would return a hash whose keys are the test names and whose values are the bin information? Then I could call this method on the last flow file generation event and create the 3rd-party files.
thx
It doesn't really provide anything like that today, though if you reach into its internals, this should be close to what you want:
TestIds.current_configuration.allocator.store['assigned']['bins']
TestIds.current_configuration.allocator.store['manually_assigned']['bins']

How to add a custom environment map for the background in Autodesk Forge?

I want to add an environment map for the background. I have tried viewer.setLightPreset(value), but I don't like the default maps; I need to add a custom environment map for the background. I learned about "Add Custom Light for the View and Data API Viewer" and added this code in my viewer:
Autodesk.Viewing.Private.LightPresets.push({
    name: "selfEvn",
    path: "selfEvn",
    type: "logluv",
    tonemap: 1,
    E_bias: -2.0,
    directLightColor: [0, 0.84, 0.67],
    ambientColor: [0.8, 0.9, 1],
    lightMultiplier: 0.1,
    bgColorGradient: [230, 230, 230, 150, 150, 150],
    darkerFade: !1
});
viewer3D.setLightPreset(Autodesk.Viewing.Private.LightPresets.length - 1);
Forge uses files with the .dds suffix. I made a .dds file with NVIDIA Texture Tools for Adobe Photoshop and put it under this path: res\environments. But the viewer can't use my file. I opened the default files under res\environments; they look like this: Default files. I don't know if my method is wrong or my files are wrong. My files look just like images, but their suffix is DDS.
And my model was created in Revit.
After checking with the dev team: there is currently no API available for converting and setting a user-owned background image (environment map) for models coming from Revit and the Model Derivative translation. The custom background image feature is only available for Autodesk Fusion 360 models, but there is a known issue with image translation from Fusion models which the dev team is investigating now. We apologize for any inconvenience caused.
In addition, we cannot guarantee the certainty and stability of those private APIs. Private APIs are only for the internal use of the Forge Viewer. Therefore, we do not recommend that partner developers like you use the APIs under the Autodesk.Viewing.Private namespace.
However, we can log this request in our internal system so the dev team can allocate time to investigate. Maybe it will come true someday, but we have no idea when that will be, so we cannot make any promises. We hope you will understand.

Saving the stream using Intel RealSense

I'm new to Intel RealSense. I want to learn how to save the color and depth streams to bitmaps. I'm using C++ as my language. I have learned that there is a ToBitmap() function, but it can only be used from C#.
So I wanted to know whether there is any method or function that will help me save the streams.
Thanks in advance.
I'm also working my way through this. It seems that the only option is to do it manually: we need to get ImageData from PXCImage. The actual data is stored in ImageData.planes, but I still don't fully understand how it's organized.
Here you can find an example of getting depth data: https://software.intel.com/en-us/articles/dipping-into-the-intel-realsense-raw-data-stream?language=en
But I still have no idea what the pitches are or how the data inside planes is organized.
Here a kind of reverse process is described: https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/332718
I would be glad if you are able to get some insight from this information.
And I would obviously be glad if you discover some insight you can share :).
UPD: Here is something that looks like what we need. I haven't worked with it yet, but it sheds some light on the internal organization of planes[0]: https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/514663
UPD2: To add some completeness to the answer: you can then create a GDI+ image from the data in ImageData:
auto colorData = PXCImage::ImageData();
if (image->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_RGB24, &colorData) >= PXC_STATUS_NO_ERROR) {
    auto colorInfo = image->QueryInfo();
    // pitches[0] is the stride in bytes of the first (and only) plane for RGB24
    auto colorPitch = colorData.pitches[0] / sizeof(pxcBYTE);
    // planes[0] points at the raw pixel data of that plane
    Gdiplus::Bitmap tBitMap(colorInfo.width, colorInfo.height, colorPitch, PixelFormat24bppRGB, colorData.planes[0]);
    image->ReleaseAccess(&colorData);
}
And Bitmap is a subclass of Image (https://msdn.microsoft.com/en-us/library/windows/desktop/ms534462(v=vs.85).aspx), so you can save the Image to a file in different formats.
