BACnet devices vs BACnet objects

As a beginner in the world of BACnet I am looking for clarification on BACnet devices. If I have a system that I want to be BACnet-compatible, I am assuming that the system is modelled as objects (such as an analog input) and that the actual device is a controller or similar that hosts multiple objects. ASHRAE Standard 135-2016 states that there is supposed to be exactly one Device object in each device. Is each system on a BACnet network (i.e. HVAC, sensors, lighting, doors, or anything else) considered a device or an object? Thanks for any help!

A system (such as an air handler) is normally controlled via one or more devices. Within each device, each real-world piece of data (such as the measured air temperature, or the control signal sent to a motor) would be an input, an output, or a value object.
The control logic for a complex system, like an air handler, would be contained within the collection of devices as either fixed firmware, programmable control modules, or through the configuration of some of the complex standard BACnet object types.
For example, control of a damper within an air handler might be handled by a BACnet Loop object (PID loop) tied to an Analog Output object; detection of adverse conditions with the damper might be monitored by an Event Enrollment object; and a log of the damper's performance might be generated by a Trend Log object. And the overall control logic for the air handler might be handled by a collection of Program objects.
Stepping back and looking at a larger part of the HVAC system, cooperation between the air handler, and the VAV boxes which distribute the conditioned air, might be handled by the Program objects in the air handler devices reading and writing input, output, and value objects in the collection of devices which control the VAV boxes.
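As a toy sketch of this object model (Python, purely illustrative: the class names are made up and this is not a BACnet stack), a device holds a collection of objects, one of which is the mandatory Device object describing the device itself:

```python
# Toy illustration of the BACnet device/object relationship.
# All names here are invented for illustration; this is not a BACnet stack.

class BacnetObject:
    def __init__(self, object_type, instance, present_value=None):
        self.object_type = object_type
        self.instance = instance
        self.present_value = present_value

class Device:
    """One physical controller; `instance` is its device instance number."""
    def __init__(self, instance):
        self.objects = {}
        # Exactly one Device object, which represents the device itself
        # and whose object-list includes every object (itself included).
        self.add(BacnetObject("device", instance))

    def add(self, obj):
        self.objects[(obj.object_type, obj.instance)] = obj

    def object_list(self):
        return sorted(self.objects.keys())

# An air-handler controller with a temperature input and a damper output:
ahu = Device(1001)
ahu.add(BacnetObject("analog-input", 1, present_value=21.5))   # supply air temp
ahu.add(BacnetObject("analog-output", 1, present_value=40.0))  # damper position

print(ahu.object_list())
```

Note that the Device object appears in its own object list, which is the self-referential wrinkle discussed in the other answers.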

Is each system on a BACnet network (i.e. HVAC, sensors, lighting, doors, or anything else) considered a device or an object?
Anything within a BACnet network with an "instance number" (device address) is considered a DEVICE.
That said, each device must contain an internal object (of type 8, Device) which represents the device itself. In other words, the Device object is just a way to represent the properties of its device: it is NOT the device itself, nor is the device an object.
I hope I have clarified your doubt.
Cheers!

I'll try to narrow the focus a little more to your question; the model is slightly self-referential (almost recursive), at least in the one case of a 'device'.
Here's my stab at a simple summary:
In terms of an explicit/concrete model, BACnet models the essential, high-level pieces (the actors within the model) as either an 'object', or a 'property' hanging off a parent object (most likely one of a related set of properties that belong to that parent object).
Each object belongs to a class/grouping of objects, or more correctly an object-type: each object is stamped with one of the standard types, in order to identify its core/base capabilities.
You have physical devices ("Woo-hoo! I can touch it!!" ;D).
And then you have BACnet's logical rendition of a device. It's not really a standalone item within the BACnet model, at least not in the way an 'object' or 'property' is; it's mostly of interest as 'how I get there', the location of the treasure chest of objects you actually care about ("stuff my wonderful [object] children, what about me [the parent device]!?" ;P). A 'device' in the BACnet world is represented as one specific object-type: a DEVICE-type object (or rather, an object whose 'object-type' property is set to DEVICE).
So it is mostly a gateway to the real items of interest within this OOP (Object-Oriented Programming)-like model: the 'objects' and their associated (child) 'properties'.
But (and here's the chicken-and-egg part) the device is also represented as an object itself, above and beyond just being the location where the device's objects reside on the BACnet internetwork. That object is the keeper of the list of all the objects belonging to that physical device (at least as exposed by the vendor, with some vendor-proprietary choices in how the values are conveyed), and the 'object-list' property of that Device object also contains a reference to the Device object itself. =S
If you read this a few times it should make some sense, even if at first glance it isn't immediately intuitive (at least not without seeing it in front of your own eyes). ;P

Related

What peer-to-peer protocol has the shortest specification?

I want to implement a P2P protocol in C for personal education purposes.
What would be the protocol with the shortest specification that is still used today?
I have already implemented a web and IRC client and server.
I agree with Mark that point-to-point over a serial link would be a good exercise.
In particular, I would recommend the following programme:
Implement basic transmission over a serial port (using RS-232 if you have some Arduinos/embedded processors lying around, or a null modem emulator if you don't; see com0com on Windows, or this on Linux/Mac).
I.e. send lower case letters from A->B, and echo them back as upper case from B->A
Implement SLIP as a way to reliably frame messages
i.e. you can send any string (e.g. "hello") and it is returned in upper case with "WORLD" appended ("HELLOWORLD").
Implement the "Read Multiple Holding Registers" and "Write Multiple Holding Registers" part of the Modbus protocol, using SLIP to frame the messages.
I.e. you have one follower (slave) device and one leader (master) device. The follower has 10 bytes of memory that are exposed over Modbus with the initial value "helloworld".
Just hard-code the follower / leader device Ids for now.
The leader reads the value, and then sets it to be "worldhello".
At the end of this you would start to have an understanding of the roles of the network layers, i.e.:
The physical layer - Serial/RS-232
A "link layer" of sorts - SLIP
An "application" layer - Modbus
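The SLIP framing step above can be sketched as follows (a minimal RFC 1055 encoder/decoder; real implementations read from the serial port byte by byte rather than from a complete buffer):

```python
# Minimal SLIP (RFC 1055) byte stuffing: END delimits frames, and any
# END/ESC byte inside the payload is escaped so it can't be mistaken
# for a frame boundary.

END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(payload: bytes) -> bytes:
    out = bytearray([END])           # leading END flushes any line noise
    for b in payload:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    out, esc = bytearray(), False
    for b in frame:
        if esc:
            out.append(END if b == ESC_END else ESC)
            esc = False
        elif b == ESC:
            esc = True
        elif b != END:               # END bytes delimit; they carry no data
            out.append(b)
    return bytes(out)

# Round trip, including a payload byte that collides with the END marker:
msg = b"hello\xc0world"
assert slip_decode(slip_encode(msg)) == msg
```

Once framing works, the Modbus exercise reduces to defining what the payload bytes inside each frame mean.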
Serial. The answer is serial. You're not going to get any leaner than simple RX/TX communication, but you'll lack a lot of convenience methods. If you want to explore more than simple bidirectional comms, I2C or Modbus opens up a lot of options.

Autosar Network Management SWS 4.2.2. - Partial networking

In the Autosar NM 4.2.2 NM PDU Filter Algorithm,
What is the significance of CanNmPnFilterMaskByte? I understand that it is ANDed (as a mask) with the Partial Network info in an incoming NM PDU to decide whether or not to participate in communication. But please explain briefly how exactly it works.
You are actually talking about Partial Networking. So, if certain functional clusters are not needed anymore, they can go to sleep and save power.
ECUs supporting PN check all NmPdus for the PartialNetworkingInfo (PNI, where each bit represents a functional cluster status) in the NmPdus UserData.
The PnFilterMask filters out any PNI info the ECU is not interested in at all (because the ECU does not contribute in any way to those functions). If, after applying the filter, everything is 0, the NmPdu is discarded and therefore does not restart the Nm-Timeout Timer. That in turn brings the Nm into the Go-to-sleep phase, even though NmPdus are still being transmitted.
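The filter step itself is just a bitwise AND. A minimal sketch, assuming a single 8-bit PNI byte and mask byte (real CanNm configurations may use several mask bytes, and the bit assignments below are hypothetical):

```python
# Sketch of the CanNm PN filter step: AND the incoming PNI byte with
# this ECU's filter mask; if nothing survives, the PDU is irrelevant.

def pn_relevant(pni_byte: int, filter_mask: int) -> bool:
    """True if the NM PDU requests at least one function this ECU serves."""
    return (pni_byte & filter_mask) != 0

# Hypothetical rear-radar ECU serving the functions on bits 2, 3 and 4:
REAR_RADAR_MASK = 0x1C  # 0b0001_1100

# PDU requesting bit 3 (say, "ParkingMode"): relevant, so the
# Nm-Timeout Timer is restarted and the ECU stays awake.
assert pn_relevant(0b0000_1000, REAR_RADAR_MASK)

# PDU requesting only bit 0 (say, "PowerTrain"): filtered to zero, the
# PDU is discarded, the timer is NOT restarted, and the ECU can sleep.
assert not pn_relevant(0b0000_0001, REAR_RADAR_MASK)
```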
By ECU, also consider Gateways.
Update: how to determine the mask
As described above, each bit represents a function.
Bit0 : Func0
..
Bit7: Func7
The OEM now has to determine which ECUs in the vehicle are required for which functions (and in which vehicle states), and how to lay out the vehicle networks.
Here are some function examples, and ECUs required excluding gateways:
ACC : 1 radar sensor front
EBA : 1 camera + 1..n radar sensor front
ParkDistanceControl (PDC): 4 Front- + 4 Rear Sensors + Visualization in Dashboard
Backup Camera: 1 Camera + Visualization ECU (which draws the lines showing, based on steering angle and speed, where the vehicle would move within the camera picture)
Blind Spot Detection (BSD) / LaneChangeAssist (LCA): 2 Radar sensors in the rear + MirrorLed Control and Buzzer Control ECU
Rear Cross Traffic Assist (RCTA) (w/ or w/o Brake + Alert): 2 Radar Sensors in the rear + MirrorLed Control and Buzzer Control ECU
Occupant Safe Exit (warn or keep doors closed in case something approaches): 2 rear radar sensors + DoorLock ECU(s)
The next thing is that some functions are distributed over several ECUs.
E.g. the 2 rear radar sensors can do the whole BSD/LCA, RCTA and OSE functions, possibly including the LED driver for the MirrorLEDs and a rear buzzer driver; or they send this information over CAN to a central ECU which handles the MirrorLEDs and the rear buzzer. (Such short-range radar sensors are what I have been working on for a long time now, and the number of different functions has grown over the years.)
The camera can have some companion radar sensors (e.g. the one ACC runs on, or some short-range radars) to help verify/classify image data / objects.
The PDC sensors may themselves be small ECUs feeding information to a central PDC ECU, which actually handles the output to the dashboard.
So, not all of them need to be activated all the time and pull on the battery.
BSD/LCA and RCTA/B need to work while driving or parking: RCTA/B only when reverse gear is selected; BSD/LCA only in forward gear or neutral; PDC only when parking (low speed forward/reverse); the Backup Camera only when reverse gear is engaged for parking. OSE can be active at standstill, with the engine on (e.g. dropping off a passenger at a traffic light) or off (driver leaves and locks the vehicle).
Now, for each of these cases, you need to know:
which ECUs are still required for each vehicle state and functional state
the network topology telling you, how these ECUs are connected.
You need to consider gateway ECUs here, since they have to route certain information between multiple networks.
You would assign 1 bit of the Nm flags per function or function cluster (e.g. BSD/LCA/RCTA = 1 bit, OSE = 1 bit, BackupCam/PDC ("parking mode") = 1 bit).
e.g. CanNmPnInfo Flags might be defined as:
Bit0 : PowerTrain
Bit1 : Navi/Dashboard Cluster
Bit2 : BSD/LCA/RCTA
Bit3 : ParkingMode
Bit4 : OSE
...
Bit7 : SmartKeyAutomaticBackDoor (the DoorLock ECU detects the key nearby plus a swipe/motion, to open the back door automatically)
It may also be possible to have CL15 devices without PNI, because their functions are only active while the engine is on, like ACC, EBA, TrafficJamAssist ... (even BSD/LCA/RCTA could be considered like that). You could maybe handle them without CL30 + PNI.
So, you now have an assignment of function to a bit in the PNI, and you know which ECUs are required.
E.g. the radar sensors in the rear need 0x1C (bits 2, 3, 4), though they need to be aware that some ECUs might not deliver information anymore because they are off (e.g. Speed and SteeringAngle from the powertrain, turned off after CL15 off -> OSE), and that this is not an error (CAN message timeouts).
The gateway might need some more bits in the mask in order to keep subnetworks alive, or to actually wake up some networks and their ECUs (e.g. a remote key waking up the DoorLock ECUs).
So a gateway in the rear might have 0xFC as a mask, but a front gateway 0x03.
The backup camera might only be activated at low speed (<20 km/h) in reverse gear, to power it up, but the PDCs can work without reverse gear.
The PNI flags are usually defined by the OEM, because they are a vehicle-level architectural item; they cannot usually be defined by a supplier.
They should actually be part of the AUTOSAR ARXML SystemDescription (see AUTOSAR_TPS_SystemTemplate.pdf):
EcuInstance --> CanCommunicationConnector (pnc* attributes)
Usually the AUTOSAR configuration tools should be able to extract this information automatically in order to configure CanNm / Nm and ComM (user requests).
Sorry for the delay; finding an example to describe it can be quite tedious. But I hope it helps.

Enterprise Architect: Model a simple ECU

I've used Enterprise Architect (EA) to create pretty drawings and I've liked it for that purpose. But I find that I have a lack of understanding of how diagram elements link to one another. What I find particularly frustrating is that there is very little documentation on how this linking works (although lots of documentation on how to draw pictures).
I would like to create a model of a simple processor/ECU (electronics control unit). Here is the behaviour:
An ECU has an instance of NVRAM (which is just a class) for an attribute
An ECU has a voltage supply (an analog value representing the voltage level supplied to the ECU)
An ECU has two digital input ports
Each digital input port fires signals when its value changes
the ECU has a state machine with three states; the state machine enters state 1 on entry; the state machine transitions to state 2 on a firing of either digital input ports so long as the ECU voltage supply is greater than 10 V
the ECU exits to state 3 when the voltage drops below 8 V, and goes back to normal processing when the voltage rises above 9 V
Can you develop a model that demonstrates how these elements interact? (Is there some reference I can read on how to understand this approach?)
Here's my first attempt:
State Machine
I used a composite diagram in the ECU state so that I could have access to the digital ports diagrammatically. I created a link for each port so that they "realize" class input PIn. I assume I can depict class attributes this way.
I "create a link" so that the DIO triggers realize the DIO ports. Not sure I can do this.
The class state machine is where I get lost. I'm not sure how to create a trigger for ECU.Voltage < 8.
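To pin down the semantics I'm trying to model (independent of EA), here is the behaviour as a small executable sketch; the names and method signatures are made up purely for illustration:

```python
# Executable sketch of the described ECU state machine:
# state 1 on entry; 1 -> 2 on either digital input firing, guarded by
# supply voltage > 10 V; any state -> 3 when voltage < 8 V; 3 -> 1
# (normal processing) when voltage rises above 9 V.

class ECU:
    def __init__(self):
        self.voltage = 12.0
        self.state = 1                # state 1 on entry

    def set_voltage(self, v):
        self.voltage = v
        if self.state != 3 and v < 8.0:
            self.state = 3            # low-voltage state
        elif self.state == 3 and v > 9.0:
            self.state = 1            # back to normal processing

    def digital_input_changed(self, port):
        # Either port firing moves 1 -> 2, guarded by supply > 10 V.
        if self.state == 1 and self.voltage > 10.0:
            self.state = 2

ecu = ECU()
ecu.digital_input_changed(0)
assert ecu.state == 2                 # guard passed (12 V > 10 V)
ecu.set_voltage(7.5)
assert ecu.state == 3                 # voltage dropped below 8 V
ecu.set_voltage(9.5)
assert ecu.state == 1                 # recovered above 9 V
```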

Modbus TCP/IP to BACnet

Firstly, I am new to this and I have tried googling for an answer, but figured it is best to ask the experts.
There is a building management system (BMS) that uses the BACnet protocol, but my equipment logger only has Modbus TCP/IP. I understand that the market has converters for this, but I would like to understand the concept.
Modbus TCP/IP has register values (e.g. 40135), each dedicated to a specific parameter reading. How does the converter read this register value into the BACnet BMS? Do you have to specify the register value in the converter software for the output on the BACnet side?
In general, what should be entered at the BACnet end to read an equipment parameter such as power received?
In this situation, is the BACnet BMS considered the MASTER and the equipment the SLAVE?
I hope someone can take some time to clear up my doubts on this. I will really appreciate it.
Thank you.
A couple of assumptions on my end regarding the setup:
Your equipment is acting as a "Modbus/TCP Slave" (i.e. it will respond to polls from a Modbus/TCP Master)
The converter then acts as a Modbus/TCP Master
And then the converter acts as a BACnet slave/server (or in BACnet terminology, a "B" device)
And your BMS system polls the converter as a BACnet master/client/"A" device
That is the normal setup. The converter device then has the responsibility to poll your equipment for the value of the Modbus register; this is normally only a 16-bit integer, though in some cases vendors pack a float into two 16-bit integers using a variety of byte-order and floating-point formats. It is a mess. Nevertheless, a good converter will allow you to unpack the value into a float, provision it with some BACnet-specific metadata ("Properties") such as Units, BACnet Object Type, Object Instance, Reliability flags etc., and make this new object discoverable by any BMS system.
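The float-unpacking step looks roughly like this (register order and byte order genuinely vary by vendor, so treat the `word_swap` flag as one of several possible conventions):

```python
# Combining two 16-bit Modbus holding registers into an IEEE-754 float,
# as a BACnet converter/gateway would do before exposing the value as a
# BACnet object's Present_Value.
import struct

def regs_to_float(hi_reg: int, lo_reg: int, word_swap: bool = False) -> float:
    """Combine two 16-bit registers into a 32-bit big-endian float."""
    if word_swap:                      # some vendors send the words reversed
        hi_reg, lo_reg = lo_reg, hi_reg
    raw = struct.pack(">HH", hi_reg, lo_reg)   # big-endian 16-bit words
    return struct.unpack(">f", raw)[0]

# 230.5 as a big-endian IEEE-754 float is the bytes 43 66 80 00,
# i.e. the two registers 0x4366 and 0x8000:
assert abs(regs_to_float(0x4366, 0x8000) - 230.5) < 1e-6
```

A converter's point configuration essentially maps "registers 40135..40136, format X" to "Analog Input instance N, units kW", which is why you must specify the register addresses and format on the converter.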
More sophisticated converters can add other BACnet services, such as Change-of-Value (COV), Intrinsic Alarming, and Trend Logging if desired. This is of course dependent on the particular vendor.
Just to add to what's already been said: a lot of the time out in the field there are gateway devices that encompass the conversion process for you, so unless you're the one setting up the BMS, you generally don't have to concern yourself with the conversion specifics.
If the device's 'Max(imum) APDU Length' is set to 480, then the device is probably a Modbus device (or a Modbus device is probably sitting behind the gateway's/converter's object/point).

Linux dma driver dma_cap_set,dma_cap_zero

I'm writing a Linux device driver for a DMA controller, and while going through the source of DMA drivers in LXR I came across the functions dma_cap_zero and dma_cap_set and the whole family of dma_cap_* macros. What are these functions?
There is also an enumeration called dma_transaction_type:
enum dma_transaction_type {
DMA_MEMCPY,
DMA_XOR,
DMA_PQ,
DMA_XOR_VAL,
DMA_PQ_VAL,
DMA_MEMSET,
DMA_INTERRUPT,
DMA_SG,
DMA_PRIVATE,
DMA_ASYNC_TX,
DMA_SLAVE,
DMA_CYCLIC,
DMA_INTERLEAVE,
/* last transaction type for creation of the capabilities mask */
DMA_TX_TYPE_END,
};
What do the enum values represent?
These functions are actually preprocessor macros, and are used by slave-DMA client drivers to configure and request DMA channels.
Here is an example of them being used:
dma_cap_mask_t mask;
dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
dma_chan1 = dma_request_channel(mask, NULL, NULL);
This code is from http://ecourse.wikidot.com/dmatest.
First, there's the datatype dma_cap_mask_t, defined in dmaengine.h around line 233. It is a bitfield type whose bits indicate what kinds of transfers a DMA channel is capable of.
In the code snippet above, which occurs in the linked code's __init routine, the mask is declared to be of the special dma_cap_mask_t datatype. Then the dma_cap_zero() function is called and mask is passed to it.
I believe dma_cap_zero simply zeroes out the capability mask. It is defined in dmaengine.h around line 733. The function returns void, and I think it zeroes the bitfield. I'm not entirely sure, though, because kernel code is a massive pile of macro magic that I sometimes have a hard time deciphering.
After the mask is zeroed, or initialized in some fashion by dma_cap_zero, the capabilities of the channel must be set. The dma_cap_set function accomplishes this: it takes the requested transaction type and sets the mask according to the capabilities required to perform that type of transaction. If you're confused about how the enumeration is being used, take a look at this page for a simple review of enums. In this case, the values in the enum describe different types of DMA transactions, each of which needs a different set of "capabilities".
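Conceptually the mask is just a bitmap indexed by the dma_transaction_type values. A userspace sketch of the semantics (Python, purely illustrative: the real macros are C and operate on a DECLARE_BITMAP-backed struct, and the enum positions below only mirror the order quoted in the question):

```python
# Userspace model of what dma_cap_zero / dma_cap_set / dma_has_cap do:
# clear a bitmap, set the bit for one transaction type, test a bit.

from enum import IntEnum

class DmaTransactionType(IntEnum):   # positions mirror the quoted enum
    DMA_MEMCPY = 0
    DMA_XOR = 1
    DMA_PQ = 2
    DMA_SLAVE = 10                   # illustrative; check your kernel's header

def dma_cap_zero() -> int:
    return 0                         # clear every capability bit

def dma_cap_set(tx_type: DmaTransactionType, mask: int) -> int:
    return mask | (1 << tx_type)     # mark this transaction type as wanted

def dma_has_cap(tx_type: DmaTransactionType, mask: int) -> bool:
    return bool(mask & (1 << tx_type))

mask = dma_cap_zero()
mask = dma_cap_set(DmaTransactionType.DMA_MEMCPY, mask)
assert dma_has_cap(DmaTransactionType.DMA_MEMCPY, mask)
assert not dma_has_cap(DmaTransactionType.DMA_XOR, mask)
```

dma_request_channel then compares a channel's capability bitmap against the requested mask when picking a channel.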
Once the mask is properly set for the type of DMA transactions you want to perform, you request the DMA channel.
The other dma_cap_* macros perform other manipulations on the DMA mask without the driver needing to know what's going on behind the scenes. These kinds of macros are all over the kernel code, for many more manipulations than just DMA. They allow device drivers to get things done in the kernel without having to worry about how the kernel does it.
