Subscribe to simulated data from an OPC UA server using OPC Publisher - Azure

I'm able to connect the OPC UA simulation server to OPC Publisher and can see the messages in the Azure portal.
Is it mandatory to specify the node details to get data, or is there any way to subscribe to all the data from the OPC UA server?
Currently I need to provide the node details to subscribe, e.g.:
[
  {
    "Id": "ns=3;i=1001",
    "OpcSamplingInterval": 2000,
    "OpcPublishingInterval": 5000,
    "DisplayName": "Counter"
  },
  {
    "Id": "ns=6;s=MyLevel",
    "OpcSamplingInterval": 2000,
    "OpcPublishingInterval": 5000,
    "DisplayName": "MyLevel"
  }
]
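These node entries sit inside the OPC Publisher publishednodes.json file under the server endpoint they belong to. A rough sketch of the full file (the endpoint URL is a placeholder, and the field names follow the OPC Publisher published-nodes format as I understand it, so check the docs for your version):

[
  {
    "EndpointUrl": "opc.tcp://<opc-ua-server>:<port>/<path>",
    "UseSecurity": false,
    "OpcNodes": [
      {
        "Id": "ns=3;i=1001",
        "OpcSamplingInterval": 2000,
        "OpcPublishingInterval": 5000,
        "DisplayName": "Counter"
      }
    ]
  }
]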
Also, if a subscriber joins the OPC UA server late, is there any option to read messages from the beginning?
Or, if the subscriber is unavailable for 2-3 seconds due to some issue, does the server capture the missed messages and send them when the subscriber is available again?

Related

Apache Pulsar Client - Broker notification of Closed consumer - how to resume data feed?

TL;DR: we use the Python client library to subscribe to a Pulsar topic. The logs show "Broker notification of Closed consumer" when something happens server-side. The subscription appears to be re-established according to the logs, but we later find that the backlog was growing on the cluster because no messages were being delivered to our subscription.
We are running into an issue where an Apache Pulsar cluster that is opaque to us, with a namespace defined where we publish/consume topics, is losing its connection with our consumer.
We have a python client consuming from a topic (with one Pulsar Client subscription per thread).
We have run into an issue where, due to an issue on the pulsar cluster, we see the following entry in our client logs:
"Broker notification of Closed consumer"
followed by:
"Created connection for pulsar://houpulsar05.mycompany.com:6650"
....for every thread in our agent.
Then we see the usual periodic log entries like this:
{"log":"2022-09-01 04:23:30.269 INFO [139640375858944] ConsumerStatsImpl:63 | Consumer [persistent://tenant/namespace/topicname, subscription-name, 0] , ConsumerStatsImpl (numBytesRecieved_ = 0, totalNumBytesRecieved_ = 6545742, receivedMsgMap_ = {}, ackedMsgMap_ = {}, totalReceivedMsgMap_ = {[Key: Ok, Value: 3294], }, totalAckedMsgMap_ = {[Key: {Result: Ok, ackType: 0}, Value: 3294], })\n","stream":"stdout","time":"2022-09-01T04:23:30.270009746Z"}
This gives the appearance that some connection has been re-established to some other broker.
However, we do not get any messages being consumed. We have an alert on Grafana dashboard which shows us the backlog on topics and subscription backlog. Eventually it either hits a count or rate thresshold which will alert us that there is a problem. When we restart our agent, the subscription is re-establish and the backlog is can immediately be seen heading to 0.
Has anyone experienced such an issue?
Our code is typical:
import pulsar

# client creation is not shown in the original post; the broker address is taken from the logs above
client = pulsar.Client('pulsar://houpulsar05.mycompany.com:6650')

consumer = client.subscribe(
    topic='my-topic',
    subscription_name='my-subscription',
    consumer_type=my_consumer_type,   # placeholders as in the original post
    consumer_name=my_agent_name
)
while True:
    msg = consumer.receive()
    ex = msg.value()
    consumer.acknowledge(msg)  # messages are acked after processing
I haven't yet found a readily available way (docker-compose or anything else) to run a multi-cluster Pulsar installation locally on Docker Desktop so I can try killing off a broker and see how the consumer reacts.
Currently the Python client only supports configuring one broker's address and doesn't support retry for lookup yet. Here are two related PRs to add that support:
https://github.com/apache/pulsar/pull/17162
https://github.com/apache/pulsar/pull/17410
Therefore, setting up a multi-node cluster might be no different from a standalone for this purpose.
If you only specified one broker in the service URL, you can simply test with a standalone. Run a consumer and a producer sending messages periodically, then restart the standalone. The "Broker notification of Closed consumer" message appears when the broker actively closes the connection, e.g. when your consumer has sent a SEEK command (via a seek call), the broker will disconnect the consumer and the log line appears.
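A minimal producer for that standalone test could look like this (service URL and topic name are placeholders; it just sends a numbered message every second so you can watch whether the consumer keeps receiving after the restart):

import time
import pulsar

# assumes a local standalone broker; adjust the service URL as needed
client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer('my-topic')

i = 0
while True:
    producer.send(('message-%d' % i).encode('utf-8'))
    print('sent message-%d' % i)
    i += 1
    time.sleep(1)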
BTW, it's better to show your Python client version. And GitHub issues might be a better place to track the issue.

Subscribing and reading from Topic: ActiveMQ & Python

I am trying to subscribe to a topic in ActiveMQ running on localhost, using stompest to connect to the broker. Please refer to the code below:
import os
import json
from stompest.sync import Stomp
from stompest.config import StompConfig

CONFIG = StompConfig(uri=os.environ['MQ_URL'],
                     login=os.environ['MQ_UID'],
                     passcode=os.environ['MQ_DWP'],
                     version="1.2")
topic = '/topic/SAMPLE.TOPIC'
msg = {'refresh': True}

client = Stomp(CONFIG)
client.connect()
client.send(topic, json.dumps(msg).encode())
client.disconnect()

client = Stomp(CONFIG)
client.connect(heartBeats=(0, 10000))
token = client.subscribe(topic, {
    "ack": "client",
    "id": '0'
})
frame = client.receiveFrame()
if frame and frame.body:
    print(f"Frame received from MQ: {frame.info()}")
client.disconnect()
Although I see an active connection in the ActiveMQ web console, no message is received in the code. The flow of control seems to pause at frame = client.receiveFrame().
I didn't find any reliable resource or documentation regarding this.
Am I doing anything wrong here?
This is the expected behavior since you're using a topic (i.e. pub/sub semantics). When you send a message to a topic it will be delivered to all existing subscribers. If no subscribers are connected then the message is discarded.
You send your message before any subscribers are connected which means the broker will discard the message. Once the subscriber connects there are no messages to receive therefore receiveFrame() will simply block waiting for a frame as the stompest documentation notes:
Keep in mind that this method will block forever if there are no frames incoming on the wire.
Try either sending a message to a queue and then receiving it or creating an asynchronous client first and then sending your message.
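For example, reordering the original snippet so the subscription exists before the message is sent (same topic name and environment variables as in the question) would look roughly like this:

import os
import json
from stompest.sync import Stomp
from stompest.config import StompConfig

CONFIG = StompConfig(uri=os.environ['MQ_URL'],
                     login=os.environ['MQ_UID'],
                     passcode=os.environ['MQ_DWP'],
                     version="1.2")
topic = '/topic/SAMPLE.TOPIC'

# subscribe first so the broker has somewhere to deliver the message
client = Stomp(CONFIG)
client.connect(heartBeats=(0, 10000))
client.subscribe(topic, {"ack": "client", "id": "0"})

# now publish; the already-connected subscriber will receive it
client.send(topic, json.dumps({'refresh': True}).encode())

frame = client.receiveFrame()
if frame and frame.body:
    print(f"Frame received from MQ: {frame.info()}")
    client.ack(frame)
client.disconnect()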

Node.js rhea npm for AMQP couldn't create subscription queue on address in ActiveMQ Artemis

I have an address "pubsub.foo" already configured as multicast in broker.xml.
<address name="pubsub.foo">
   <multicast/>
</address>
As per the Artemis documentation:
When clients connect to an address with the multicast element, a subscription queue for the client will be automatically created for the client.
I am creating a simple utility using rhea AMQP Node.js npm to publish messages to the address.
var connection = require('rhea').connect({ port: args.port, host: args.host, username: 'admin', password: 'xxxx' });
var sender = connection.open_sender('pubsub.foo');
sender.on('sendable', function(context) {
    var m = 'Hii test';
    console.log('sent ' + m);
    sender.send({body: m});
    connection.close();
});
I enabled the debug log, and while running the client code I see a message like this:
2020-02-03 22:43:25,071 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage#68933e4b is not going anywhere as it didn't have a binding on address:pubsub.foo
I also tried different variations of the topic, for example client1.pubsub.foo and pubsub.foo::client1, but no luck from the client code. Please share your thoughts. I am new to ActiveMQ Artemis.
What you're observing actually is the expected behavior.
Unfortunately, the documentation you cited isn't as clear as it could be. When it says a subscription queue will be created in response to a client connecting, it really means a subscriber, not a producer. That's why it creates a subscription queue. The semantics for a multicast address (and publish/subscribe in general) dictate that a message sent when there are no subscribers will be dropped. Therefore, you need to create a subscriber and then send a message.
If you want different semantics then I recommend you use anycast.
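To illustrate the subscribe-first pattern, here is a rough Python sketch using the qpid-proton library rather than rhea (a deliberate swap, just for illustration; host, port, and credentials are the placeholders shown above). It attaches a receiver to pubsub.foo so Artemis creates the multicast subscription queue before anything is sent:

from proton.handlers import MessagingHandler
from proton.reactor import Container

class Subscriber(MessagingHandler):
    def __init__(self, url, address):
        super(Subscriber, self).__init__()
        self.url = url
        self.address = address

    def on_start(self, event):
        # opening the receiver is what triggers Artemis to create the subscription queue
        conn = event.container.connect(self.url, user='admin', password='xxxx')
        event.container.create_receiver(conn, self.address)

    def on_message(self, event):
        print('received:', event.message.body)

# run this first, then run the rhea sender from the question
Container(Subscriber('amqp://localhost:5672', 'pubsub.foo')).run()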

Process Azure IoT hub events from a single device only

I'm trying to solve for having thousands of IoT devices deployed, all logging events to Azure IoT Hub, and then being able to read events created by a single device ID only.
I have been playing with EventProcessorHost to get something like this working, but so far I can only see a way to read all messages from all devices.
It's not a feasible solution to read all the messages and filter client-side, as there may be millions of messages.
The major purpose of Azure IoT Hub is the ingestion of mass events from devices into a cloud stream pipeline for real-time analysis. The default telemetry path (hot path) is via a built-in Event Hub, where all events are temporarily stored in the EH partitions.
Besides that default endpoint (events), there is also the capability to route an event message to custom endpoints based on rules (conditions).
Note that the number of custom endpoints is limited to 10 and the number of rules to 100. If this limit matches your business model, you can very easily stream 10 devices individually, as described in Davis' answer.
However, splitting a telemetry stream pipeline based on the sources (devices) beyond this limit (10+1) will require additional Azure entities (components).
The following picture shows a solution for splitting a telemetry stream pipeline based on the devices using a Pub/Sub push model.
The above solution is based on forwarding the stream events to the Azure Event Grid using a custom topic publisher. The event schema for Event Grid eventing is here.
The Custom Topic Publisher for Event Grid is represented by an Azure EventHubTrigger function, where each stream event is mapped to an Event Grid event message with a subject that indicates the registered device.
Azure Event Grid is a loosely decoupled Pub/Sub model, where events are delivered to subscribers based on their subscriptions. In other words, if there is no match for delivery, the event message simply disappears.
Note that Event Grid routing can handle 10 million events per second per region. The number of subscriptions is limited to 1000 per region.
Using the REST API, subscriptions can be dynamically created, updated, deleted, etc.
The following code snippet shows an example of the Azure Function implementation for mapping the stream event to the Event Grid event message. As you can see, it is a very straightforward implementation:
run.csx:
#r "Newtonsoft.Json"
#r "Microsoft.ServiceBus"
using System.Configuration;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventGrid.Models;
using Microsoft.ServiceBus.Messaging;
using Newtonsoft.Json;
// reusable client proxy
static HttpClient client = HttpClientHelper.Client(ConfigurationManager.AppSettings["TopicEndpointEventGrid"], ConfigurationManager.AppSettings["aeg-sas-key"]);
// AF
public static async Task Run(EventData ed, TraceWriter log)
{
log.Info($"C# Event Hub trigger function processed a message:{ed.SequenceNumber}");
// fire EventGrid Custom Topic
var egevent = new EventGridEvent()
{
Id = ed.SequenceNumber.ToString(),
Subject = $"/iothub/events/{ed.SystemProperties["iothub-message-source"] ?? "?"}/{ed.SystemProperties["iothub-connection-device-id"] ?? "?"}",
EventType = "telemetryDataInserted",
EventTime = ed.EnqueuedTimeUtc,
Data = new
{
sysproperties = ed.SystemProperties,
properties = ed.Properties,
body = JsonConvert.DeserializeObject(Encoding.UTF8.GetString(ed.GetBytes()))
}
};
await client.PostAsJsonAsync("", new[] { egevent });
}
// helper
class HttpClientHelper
{
public static HttpClient Client(string address, string key)
{
var client = new HttpClient() { BaseAddress = new Uri(address) };
client.DefaultRequestHeaders.Add("aeg-sas-key", key);
return client;
}
}
function.json:
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "ed",
      "direction": "in",
      "path": "<yourEventHubName>",
      "connection": "<yourIoTHUB>",
      "consumerGroup": "<yourGroup>",
      "cardinality": "many"
    }
  ],
  "disabled": false
}
project.json:
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.Azure.EventGrid": "1.1.0-preview"
      }
    }
  }
}
Finally, a screen snippet shows an Event Grid event message received by the AF subscriber for Device1.
If you're ok with Java/Scala, this example shows how to create a client and filter messages by device Id:
https://github.com/Azure/toketi-iothubreact/blob/master/samples-scala/src/main/scala/A_APIUSage/Demo.scala#L266
The underlying client reads all the messages from the hub though.
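The same read-everything-and-filter approach is possible from Python with the azure-eventhub package pointed at the IoT Hub built-in Event Hub-compatible endpoint. A rough sketch (connection string, hub name, and device ID are placeholders; the exact key and encoding of the system property may vary by SDK version):

from azure.eventhub import EventHubConsumerClient

# Event Hub-compatible connection string from the IoT Hub "Built-in endpoints" blade (placeholder)
CONN_STR = "<event-hub-compatible-connection-string>"
TARGET_DEVICE = "device-001"  # device of interest (placeholder)

def on_event(partition_context, event):
    # IoT Hub stamps the originating device id as a system property on every event
    device_id = event.system_properties.get(b"iothub-connection-device-id")
    if isinstance(device_id, bytes):
        device_id = device_id.decode()
    if device_id == TARGET_DEVICE:
        print(event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    CONN_STR, consumer_group="$Default", eventhub_name="<event-hub-compatible-name>")
with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" = from the start of the stream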
You could also consider using IoT Hub message routing, more info here:
https://azure.microsoft.com/blog/azure-iot-hub-message-routing-enhances-device-telemetry-and-optimizes-iot-infrastructure-resources
https://azure.microsoft.com/blog/iot-hub-message-routing-now-with-routing-on-message-body

Posting multiple data in IoT gateway Thingsboard

I just started using ThingsBoard and came across this guide: https://thingsboard.io/docs/iot-gateway/getting-started/. I have implemented it, but the problems that I'm facing are:
1. I can transmit only one key-value pair. How can I transmit multiple key-value sensor readings?
2. Also, is there any other way to access the Cassandra database so that I can retrieve all my data from ThingsBoard?
Please help. Thank you.
You are asking two very different things.
1) You can transmit more key-value pairs at once by correctly mapping the gateway incoming messages. I suppose you are working with MQTT protocol. The default mapping for this protocol is specified in /etc/tb-gateway/conf/mqtt-config.json. This file specifies how to translate the incoming MQTT messages from the broker into the ThingsBoard key-value format, before sending to the server instance of ThingsBoard.
To map more than one reading from a sensor, you can do something like this:
{
  "brokers": [
    {
      "host": "localhost",
      "port": 1883,
      "ssl": false,
      "retryInterval": 5000,
      "credentials": {
        "type": "anonymous"
      },
      "mapping": [
        {
          "topicFilter": "WeatherSensors",
          "converter": {
            "type": "json",
            "filterExpression": "",
            "deviceNameJsonExpression": "${$.WeatherStationName}",
            "timeout": 120000,
            "timeseries": [
              {
                "type": "double",
                "key": "temperature",
                "value": "${$.temperature}"
              },
              {
                "type": "double",
                "key": "humidity",
                "value": "${$.humidity}"
              }
            ]
          }
        }
      ]
    }
  ]
}
This way, if you send a message like {"WeatherStationName":"test", "temperature":25, "humidity":40} to the topic WeatherSensors you will see the two key-value pairs in ThingsBoard server, in a device named "test".
2) The best way to access data stored in the internal ThingsBoard server is via REST API, so that you can query any ThingsBoard instance with the same piece of code regardless of the technology used for the database (Cassandra, PostgreSQL, etc.). You can find a Python example in this repo.
The alternative is to use a specific query language for the database, such as SQL for PostgreSQL or CQL for Cassandra.
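For instance, a rough Python sketch of the REST approach (host, credentials, device ID, and keys are placeholders; the endpoint paths are the ThingsBoard REST API as I understand it, so double-check them against your version):

import requests

BASE = "http://localhost:8080"  # ThingsBoard server (placeholder)

# log in to obtain a JWT token
login = requests.post(f"{BASE}/api/auth/login",
                      json={"username": "tenant@thingsboard.org", "password": "tenant"})
token = login.json()["token"]
headers = {"X-Authorization": f"Bearer {token}"}

# fetch the latest timeseries values for a device (device id is a placeholder)
device_id = "<DEVICE_ID>"
resp = requests.get(
    f"{BASE}/api/plugins/telemetry/DEVICE/{device_id}/values/timeseries",
    params={"keys": "temperature,humidity"},
    headers=headers)
print(resp.json())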
Suppose a single device reports several values, for example humidity, temperature, and gas. In this case you use one access token / a single MQTT session and send the data in a single JSON payload like this:
{"humidity":42.2, "temperature":23.3, "gas":45}
If you have multiple sensors attached to a single device, send them like this:
{"sensorA.humidity":42.2, "sensorB.temperature":23.3, "sensorC.gas":45}
Available topics are static and listed here:
https://thingsboard.io/docs/reference/mqtt-api/#telemetry-upload-api
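As a concrete example, publishing the multi-key payload above over MQTT with the paho-mqtt Python package would look roughly like this (host and access token are placeholders; v1/devices/me/telemetry is the standard ThingsBoard device telemetry topic):

import json
import paho.mqtt.publish as publish

payload = {"humidity": 42.2, "temperature": 23.3, "gas": 45}
publish.single(
    "v1/devices/me/telemetry",                    # ThingsBoard device telemetry topic
    json.dumps(payload),
    hostname="localhost",                          # ThingsBoard MQTT broker (placeholder)
    port=1883,
    auth={"username": "<DEVICE_ACCESS_TOKEN>"},    # device access token as the MQTT username
    qos=1,
)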

Resources