How to create virtual BACnet devices and perform read/write operations using the BAC0 Python library

I am very new to the BACnet protocol. I'm working in Python, so I need help with the Python library called BAC0.
I have already read the BAC0 documentation online and tried its examples, but I am not able to get the correct output.
Please help me with some examples of how to create virtual BACnet devices and how to perform read and write operations on them using Python. Thanks in advance.
I tried the examples mentioned in the BAC0 documentation:
my_obj_list = [('file', 1),
               ('analogInput', 2),
               ('analogInput', 3),
               ('analogInput', 5),
               ('analogInput', 4),
               ('analogInput', 0),
               ('analogInput', 1)]
bacnet = BAC0.connect(ip='192.168.42.226/24')
mycontroller = BAC0.device('2:5',5,bacnet, object_list = my_obj_list)
print(mycontroller)
mycontroller.points
mycontroller['point_name']
2019-07-22 15:49:31,169 - WARNING | Offline: provide database name to load stored data.
(the same warning is repeated several times)
I am also getting an error:
--- Logging error ---
Traceback (most recent call last):
File "C:\Users\DELL\Anaconda3\lib\site-packages\BAC0\core\devices\Device.py", line 688, in connect
self.properties.address, self.properties.device_id
File "C:\Users\DELL\Anaconda3\lib\site-packages\BAC0\core\io\Read.py", line 184, in read
"APDU Abort Reason : {}".format(reason)
BAC0.core.io.IOExceptions.NoResponseFromController: APDU Abort Reason : noResponse

mycontroller = BAC0.device('2:5',5,bacnet, object_list = my_obj_list)
You must have a real BACnet controller on your network with those parameters to be able to read them. The code you are running does not create a virtual BACnet network; it is meant to connect to a real device.
If you want to create a virtual device, you can take inspiration from the test suite, where I create virtual devices that talk to each other:
https://github.com/ChristianTremblay/BAC0/blob/master/tests/conftest.py
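Once a device exists on the network, reading and writing with BAC0 boils down to building request strings. Below is a rough sketch: `bacnet_ref` is a hypothetical helper (not part of BAC0) that assembles the `'address objectType instance property'` string that `read()`/`write()` expect, and `run_demo` is only defined, never called, since it needs BAC0 installed and a reachable peer. All IP addresses are placeholders, and signatures may vary between BAC0 versions.

```python
def bacnet_ref(address, obj_type, instance, prop="presentValue"):
    """Build the 'address objectType instance property' request string
    that BAC0's read()/write() methods expect."""
    return "{} {} {} {}".format(address, obj_type, instance, prop)


def run_demo():
    """Sketch only: requires BAC0, a configured interface, and a peer
    device at the (placeholder) address 192.168.42.227."""
    import BAC0
    bacnet = BAC0.lite(ip="192.168.42.226/24")  # lightweight BACnet/IP app
    # Read analogInput 0 from the peer
    value = bacnet.read(bacnet_ref("192.168.42.227", "analogInput", 0))
    # Write 21.5 to analogValue 1 on the peer
    bacnet.write(bacnet_ref("192.168.42.227", "analogValue", 1) + " 21.5")
    bacnet.disconnect()
    return value
```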

Related

Need help on Terraform OCI

I am trying to learn Terraform on OCI. I have written a small piece of code in my terraform-code.tf file to create a block instance; however, when I run terraform plan I get the following error.
data "oci_identity_availability_domain" "ad" {
compartment_id = "var.tenancy_ocid"
}
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.oci_identity_availability_domain.ad: Refreshing state...
Error: Get https://identity.var.region.oraclecloud.com/20160918/availabilityDomains?compartmentId=ocid1.tenancy.oc1..aaaaaaaa35fzgotfw445uiswdvjcxnxitafa4scy4dmcuifrvvzkxylqga3q: dial tcp: lookup identity.var.region.oraclecloud.com: no such host
on terraform-code.tf line 46, in data "oci_identity_availability_domain" "ad":
46: data "oci_identity_availability_domain" "ad" {
I tried to ping identity.var.region.oraclecloud.com from my Windows machine, but with no luck:
ping identity.var.region.oraclecloud.com
Ping request could not find host identity.var.region.oraclecloud.com. Please check the name and try again.
I believe this is an issue with the proxy, where for some reason I am unable to reach identity.var.region.oraclecloud.com.
I found a similar issue on GitHub: https://github.com/terraform-providers/terraform-provider-oci/issues/960
Can anyone help me resolve this issue?
var.region is a variable and should be substituted. It is normal that you can't reach https://identity.var.region.oraclecloud.com, as that host doesn't exist. Here is a list of the existing regions.
A valid URL would be, for instance, https://identity.us-ashburn-1.oraclecloud.com
To answer my own question: the ping test to identity.var.region.oraclecloud.com does not matter.
If you receive the error below, most probably you are not passing your region OCID correctly in the required variables. To troubleshoot, you can replace the variables with the actual OCIDs, written as quoted strings.
Error: Get https://identity.var.region.oraclecloud.com/20160918/availabilityDomains?compartmentId=ocid1.tenancy.oc1..aaaaaaaa35fzgotfw445uiswdvjcxnxitafa4scy4dmcuifrvvzkxylqga3q: dial tcp: lookup identity.var.region.oraclecloud.com: no such host
on terraform-code.tf line 46, in data "oci_identity_availability_domain" "ad":
46: data "oci_identity_availability_domain" "ad" {
For me, the issue was that I was passing the variable information incorrectly.
With TF 0.11, a variable is referenced like this:
tenancy_ocid = "${var.tenancy_ocid}"
With TF 0.12 and later (including 0.13), it is referenced like this:
tenancy_ocid = var.tenancy_ocid
(the old interpolation syntax still works, but you will receive a warning)
Or, for troubleshooting, you can simply paste the literal OCID as a quoted string:
tenancy_ocid = "<your tenancy OCID>"
I have just started to learn terraform with OCI and there are not many helpful posts around.
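Putting both fixes together, a minimal sketch of the configuration might look like the following. The region name and OCIDs are placeholders you must replace, and the provider authentication settings (user OCID, key file, fingerprint) are omitted here:

```hcl
# variables.tf — placeholder values, replace with your own
variable "region" {
  default = "us-ashburn-1"            # a real region name, not "var.region"
}
variable "tenancy_ocid" {
  default = "ocid1.tenancy.oc1..xxxx" # your tenancy OCID
}

# Note: no quotes around variable references (TF 0.12+)
provider "oci" {
  region       = var.region
  tenancy_ocid = var.tenancy_ocid
}

data "oci_identity_availability_domain" "ad" {
  compartment_id = var.tenancy_ocid
}
```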

Why does it show the same log twice (once as info and once as error, with the same message) on Stackdriver?

I am setting up some logging using Python, but whenever I run my code, each log it generates shows up twice on the Stackdriver console (once as info and once as error). Does anyone have an idea how to deal with this problem?
my code:
import logging
from google.cloud import logging as gcp_logging
log_client = gcp_logging.Client()
log_client.setup_logging()
# here executing some bigquery operations
logging.info("Query result loaded into temporary table: {}".format(temporary_table))
# here executing some bigquery operations
logging.error("Query executed with empty result set.")
When I run the above code, it shows each log twice on Stackdriver:
Info:2019-10-17T11:54:02.504Z cf-mycloudfunctionname Query result loaded into temporary table: mytable
Error:2019-10-17T11:54:02.505Z cf-mycloudfunctionname Query result loaded into temporary table: mytable
What I can see is that both messages were recognized as plain text, so the same message was sent via both stdout and stderr; this is why you are getting two copies.
What you need to do is phrase these logs as structured JSON, so that Stackdriver recognizes each one as a single entry with the correct payload to be displayed.
Additionally, you can configure the Stackdriver agent to send the logs the way you need them to be sent; take a look at this document.
It will also depend on where you are trying to retrieve these logs from (GCE, GKE, BQ). In some cases it is preferable to change the fluentd configuration directly instead of the Stackdriver agent's.
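As a rough, stdlib-only illustration of the structured-JSON idea (this is not the official google-cloud-logging API): emitting each record as one JSON object with a severity field lets Cloud Logging parse it as a single entry instead of duplicating plain text across stdout and stderr. The helper names here are my own, not part of any library.

```python
import json
import sys


def format_entry(severity, message, **fields):
    """Build one structured log entry as a dict; Cloud Logging maps the
    'severity' and 'message' keys onto the corresponding LogEntry fields."""
    entry = {"severity": severity, "message": message}
    entry.update(fields)
    return entry


def log_struct(severity, message, **fields):
    """Emit the entry as a single JSON line on stdout."""
    print(json.dumps(format_entry(severity, message, **fields)), file=sys.stdout)


log_struct("INFO", "Query result loaded into temporary table: mytable")
log_struct("ERROR", "Query executed with empty result set.")
```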

Could not import vxlapi: function 'xlCanReceive' not found

I was trying to run code which interfaces with a CANcaseXL to send and receive signals from an ECU, using the python-can library.
But I am getting an error which says:
Could not import vxlapi: function 'xlCanReceive' not found
raise ImportError("The Vector API has not been loaded")
ImportError: The Vector API has not been loaded
I tried updating the CANcaseXL drivers from the Vector website, thinking it might be a driver issue, but the error still shows up.
def Can_receive_all(self):
    bus = can.interface.Bus(bustype='vector', channel=self.ch_list, bitrate=500000, app_name=self.can_app_name)
    try:
        while True:
            recv_msg = bus.recv()
            print(recv_msg)
            # if recv_msg is not None:
            #     self.print_can_data(recv_msg)
    except Exception as ex:
        print(ex)
I expected to see the Rx signals from the ECU, but I am getting the error instead.
Reading the python-can documentation, it appears that you must use the Vector Hardware Config program in order to use the CANcase with python-can. The documentation states the following:
Vector
This interface adds support for CAN controllers by Vector.
By default this library uses the channel configuration for CANalyzer. To use a
different application, open Vector Hardware Config program and create a new
application and assign the channels you may want to use. Specify the application name
as app_name='Your app name' when constructing the bus or in a config file.
Channel should be given as a list of channels starting at 0.
Here is an example configuration file connecting to CAN 1 and CAN 2 for an
application named “python-can”:
[default]
interface = vector
channel = 0, 1
app_name = python-can
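To make the app_name requirement concrete, here is a hedged sketch of the in-code equivalent of that config file. `vector_bus_kwargs` is a hypothetical helper of my own (not part of python-can), and `receive_all` is only defined, never called, since it needs python-can plus Vector hardware and drivers installed.

```python
def vector_bus_kwargs(channels, app_name="python-can", bitrate=500000):
    """Assemble the keyword arguments python-can's Bus() takes for the
    Vector backend; 'channels' is a list of 0-based channel indices that
    must be assigned to 'app_name' in Vector Hardware Config."""
    return {
        "bustype": "vector",
        "channel": list(channels),
        "app_name": app_name,
        "bitrate": bitrate,
    }


def receive_all(channels, app_name="python-can"):
    """Sketch only: open the Vector bus and print frames until interrupted."""
    import can
    bus = can.interface.Bus(**vector_bus_kwargs(channels, app_name))
    try:
        for msg in bus:  # a Bus is iterable and yields received frames
            print(msg)
    finally:
        bus.shutdown()
```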

WSO2 Stream Processor Connecting to Cassandra gives error

I am trying to connect my Siddhi application to a Cassandra data store by following the instructions in the example program provided in the editor.
I downloaded the DataStax Java driver JAR (OSGi) and placed it in the WSO2/lib folder, then started the application. Now I get an error:
> [2019-03-10_16-41-17_549] ERROR {org.wso2.siddhi.core.table.Table} - Error on 'Store-cassandra'. . Error while connecting to Table 'SweetProductionTable'. (Encoded)
java.lang.NullPointerException
at org.wso2.extension.siddhi.store.cassandra.CassandraEventTable.connect(CassandraEventTable.java:443)
at org.wso2.siddhi.core.table.Table.connectWithRetry(Table.java:388)
at org.wso2.siddhi.core.SiddhiAppRuntime.startWithoutSources(SiddhiAppRuntime.java:401)
at org.wso2.siddhi.core.SiddhiAppRuntime.start(SiddhiAppRuntime.java:376)
at org.wso2.carbon.siddhi.editor.core.internal.DebugRuntime.start(DebugRuntime.java:68)
at org.wso2.carbon.siddhi.editor.core.internal.DebugProcessorService.start(DebugProcessorService.java:37)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.start(EditorMicroservice.java:588)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invokeResource(HttpMethodInfo.java:187)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invoke(HttpMethodInfo.java:143)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.dispatchMethod(MSF4JHttpConnectorListener.java:218)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.lambda$onMessage$57(MSF4JHttpConnectorListener.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2019-03-10_16-41-17_551] ERROR {org.wso2.siddhi.core.SiddhiAppRuntime} - Error starting Siddhi App 'Store-cassandra', triggering shutdown process. null (Encoded)
Below is the corresponding code
define stream SweetProductionStream (id int, name string);
#Store(type='cassandra' , cassandra.host='localhost' ,keyspace='production')
#index('uid')
#primaryKey('uid')
define table SweetProductionTable (uid int, name string);
/* Inserting event into the cassandra keyspace */
#info(name='query1')
from SweetProductionStream
select SweetProductionStream.id as uid, SweetProductionStream.name
insert into SweetProductionTable;
and below are the instructions given in the example:
Prerequisites:
1) Ensure that Cassandra version 3 or above is installed on your machine.
2) Add the DataStax Java driver into {WSO2_SP_HOME}/lib as follows:
a) Download the DataStax Java driver from: http://central.maven.org/maven2/com/datastax/cassandra/cassandra-driver-core/3.3.2/cassandra-driver-core-3.3.2.jar
b) Use the "jartobundle" tool in {WSO2_SP_Home}/bin to extract and convert the above JARs into OSGi bundles.
For Windows: <SP_HOME>/bin/jartobundle.bat <PATH_OF_DOWNLOADED_JAR> <PATH_OF_CONVERTED_JAR>
For Linux: <SP_HOME>/bin/jartobundle.sh <PATH_OF_DOWNLOADED_JAR> <PATH_OF_CONVERTED_JAR>
Note: The driver given in the above link is an OSGi-bundled one. Please skip this step if the JAR is already OSGi bundled.
c) Copy the converted bundles to the {WSO2_SP_Home}/lib directory.
3) Create a keyspace named 'production' in the Cassandra store.
4) In the store configuration of this application, replace 'username' and 'password' values with your Cassandra credentials.
5) Save this sample.
Executing the Sample:
1) Start the Siddhi application by clicking on 'Run'.
2) If the Siddhi application starts successfully, the following message is shown on the console
* Store-cassandra.siddhi - Started Successfully!
Note:
If you want to edit this application while it's running, stop the application, make your edits and save the application, and then start it again.
Testing the Sample:
1) Simulate single events:
a) Click on 'Event Simulator' (double arrows on left tab) and click 'Single Simulation'
b) Select 'Store-cassandra' as 'Siddhi App Name' and select 'searchSweetProductionStream' as 'Stream Name'.
c) Provide attribute values, and then click Send.
2) Send at least one event where the name matches a name value in the data you previously inserted into the SweetProductionTable. This will satisfy the 'on' condition of the join query.
3) Optionally, send events to the other corresponding streams to add, delete, update, insert, and search events.
Notes:
- After a change in the store, you can use the search stream to see whether the operation is successful.
- The Primary Key constraint in SweetProductionTable is disabled, because the name cannot be used as a PrimaryKey in a ProductionTable.
- You can use Siddhi functions to create a unique ID for the received events, which can then be used to apply the Primary Key constraint on the data store records. (http://wso2.github.io/siddhi/documentation/siddhi-4.0/#function)
Viewing the Results:
See the output for raw materials on the console. You can use searchSweetProductionStream to check for inserted, deleted, and updated events.
*/
Thanks in advance.
Please provide your Cassandra credentials (username and password), e.g.:
#store(type='cassandra', cassandra.host='localhost', username='cassandra', password='cassandra', keyspace='production',
column.family='SweetProductionTable')
Please refer to this sample:
Siddhi store cassandra
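Putting that annotation onto the sample's table definition, the store block might look like the fragment below. The credentials are placeholders for your own Cassandra username and password, and the annotation syntax follows the sample's own notation:

```sql
#store(type='cassandra', cassandra.host='localhost',
       username='cassandra', password='cassandra',
       keyspace='production', column.family='SweetProductionTable')
#primaryKey('uid')
#index('uid')
define table SweetProductionTable (uid int, name string);
```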

Cassandra throws an unknown error while creating a column family through the CLI or the Astyanax client

I am trying to use Cassandra's chunked object store, and when I try to create a compatible column family for it, using the CLI or the Astyanax client, it throws this error:
INFO 12:26:31,175 Create new ColumnFamily: org.apache.cassandra.config.CFMetaData#321f321f[cfId=1000,ksName=TESTKEYSPACE1,cfName=updatedstorage,cfType=Standard,
comparator=org.apache.cassandra.db.marshal.UTF8Type,subcolumncomparator=<null>,comment=,readRepairChance=0.1,dclocalReadRepairChance=0.0,replicateOnWrite=true,gc
GraceSeconds=864000,defaultValidator=org.apache.cassandra.db.marshal.UTF8Type,keyValidator=org.apache.cassandra.db.marshal.UTF8Type,minCompactionThreshold=4,maxC
ompactionThreshold=32,keyAlias=<null>,columnAliases=[],valueAlias=<null>,column_metadata={},compactionStrategyClass=class org.apache.cassandra.db.compaction.Size
TieredCompactionStrategy,compactionStrategyOptions={},compressionOptions={sstable_compression=org.apache.cassandra.io.compress.SnappyCompressor},bloomFilterFpCha
nce=<null>,caching=KEYS_ONLY]
Unhandled exception
Type=Segmentation error vmState=0x00000000
J9Generic_Signal_Number=00000004 ExceptionCode=c0000005 ExceptionAddress=01372DED ContextFlags=0001007f
Handler1=00259A50 Handler2=003C95C0 InaccessibleAddress=000000B4
EDI=00000080 ESI=45F2A04A EAX=00000022 EBX=45E5A300
ECX=46B34830 EDX=00000080
EIP=01372DED ESP=46B347C4 EBP=46174800 EFLAGS=00010206
Module=C:\Program Files (x86)\IBM\SDP\jdk\jre\bin\jclscar_24.dll
Module_base_address=01340000 Offset_in_DLL=00032ded
Target=2_40_20080816_022093_lHdSMr (Windows Vista 6.1 build 7601 Service Pack 1)
CPU=x86 (4 logical CPUs) (0xf88ee000 RAM)
----------- Stack Backtrace -----------
sun_misc_Unsafe_getLong__Ljava_lang_Object_2J:0x01372DED [0x01372DB0 +0x0000003D]
---------------------------------------
JVMDUMP006I Processing dump event "gpf", detail "" - please wait.
JVMDUMP007I JVM Requesting System dump using 'C:\cassandra\apache-cassandra-1.1.7\bin\core.20130204.122631.9232.0001.dmp'
My CLI command is:
CREATE COLUMN FAMILY updatedstorage WITH comparator = UTF8Type AND key_validation_class=UTF8Type;
The Astyanax code is:
keyspace.createColumnFamily(CF_STANDARD1, ImmutableMap.<String, Object>builder()
        .put("default_validation_class", "UTF8Type")
        .put("key_validation_class", "UTF8Type")
        .put("comparator_type", "UTF8Type")
        .build());
Cassandra version: 1.1.7
Could you please help me understand what the problem is? Every time I run it, the JVM dumps core. :(
