Java Card count data usage of mobile user using USIM CAT event or similar - javacard

When a user consumes data in Local Break Out (LBO) roaming mode, the actual data usage is, as far as I know, not visible to the home network. The CAT offers an event for call control, but I don't see any functionality to monitor data usage. In the USIM file system I also see no file recording data usage; I see only files for call metering.
Is there any chance to get the amount of consumed data from an applet?

Related

Sending files with varying lengths over lwm2m

I am using Eclipse Leshan to access the resources of a Zolertia RE-Mote. Long story short, I want to send a binary file from my laptop to the board. However, I see that the Leshan server may not start the transmission, depending on the file size. More specifically, files of 64 B or 128 B can be transmitted while files of 705 B cannot (just an example). In addition, this limitation does not hold if the file is larger than 1 KB; in that case all the files I have tested were transmitted. Do you know what may be going wrong? Is this normal?
That depends first of all on your client: which one do you use?
Your client is required to implement RFC 7959 (CoAP block-wise transfer).
Leshan's CoAP communication is based on Eclipse Californium. To limit misuse, Californium must be configured with the largest expected resource body in "Californium.properties" via the property MAX_RESOURCE_BODY_SIZE; the default is 8192.
If that doesn't help, please try to capture the traffic and post it (preferably as an issue against Eclipse Californium).
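For illustration, a minimal "Californium.properties" fragment raising the limit mentioned above; the value 16384 is just an example large enough for the 705 B file in question:

```properties
# Californium.properties (read by Californium at startup)
# Raise the largest accepted resource body above the default of 8192 bytes.
MAX_RESOURCE_BODY_SIZE=16384
```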

How to enforce the order of messages passed to an IoT device over MQTT via a cloud-based system (API design issue)

Suppose I have an IoT device which I'm about to control (let's say switch on/off) and monitor (e.g. collect temperature readings). MQTT seems like it could be the right fit: I could publish messages to the device to control it, and the device could publish messages to a broker to report temperature readings. So far so good.
The problems start to occur when I try to design the API to control the device.
Let's say the device subscribes to two topics:
/device-id/control/on
/device-id/control/off
Then I publish messages to these topics in some order. But given that messaging is typically asynchronous, there are no guarantees on the order in which the device receives the messages.
So in case two messages are published in the following order:
/device-id/control/on
/device-id/control/off
they could be received in the reverse order, leaving the device turned on, which can have dramatic consequences depending on the context.
Of course the API could be designed in some other way, for example there could be just one topic
/device-id/control
and the payload of each message would carry its meaning (on/off). So messages published to this topic in a given order are expected to be received in the exact same order on the device.
But what if the order of publishes to individual topics cannot be guaranteed? Suppose the following architecture of a system for IoT devices:
                       / control service \
application -> broker -> control service -> broker -> IoT device
                       \ control service /
The components of the system are:
an application which effectively controls the device by publishing messages to a broker
a typical message broker
a control service with some business logic
The important part is that, as in most modern distributed systems, the control service is a distributed, multi-instance entity capable of processing multiple control messages from the application at a time. Therefore the order in which the application published its messages can end up completely scrambled by the time they are delivered to the IoT device.
Now, given that most MQTT brokers implement only QoS 0 and QoS 1 but not QoS 2, it gets even more interesting: such control messages could potentially be delivered multiple times (assuming QoS 1; see https://stackoverflow.com/a/30959058/1776942).
My point is that separate topics for control messages are a bad idea. The same goes for a single topic. In both cases there are no guarantees on message delivery order.
The only solution to this particular issue that comes to my mind is message versioning, so that old (outdated) messages can simply be skipped when they are delivered after a message with a more recent version property.
Am I missing something?
Is message versioning the only solution to this problem?
Am I missing something?
Most definitely. The example you brought up is a generic control system attached to a message-oriented scheme. There are a number of patterns that can be used in a message-based architecture. This article by Microsoft categorizes message patterns into two primary classes:
Commands and
Events
The most generic pattern of command behavior is to issue a command, then measure the state of the system to verify the command was carried out. If you forget to verify, your system has an open loop. Such open loops are (unfortunately) common in IT systems (because it's easy to forget) and often result in bugs and other bad behaviors, such as the one described above. So, the proper way to handle a command is:
Issue the command
Inquire as to the state of the system
Evaluate next action
Events, on the other hand, are simply fired off. As the publisher of an event, it is not my business to worry about who receives the event, in what order, etc. Now, it should also be pointed out that the use of any decent message broker (e.g. RabbitMQ) generally carries strong guarantees that messages will be delivered in the order which they were originally published. Note that this does not mean they will be processed in order.
So, if you treat a command as an event, your system is guaranteed to act up sooner or later.
Is message versioning the only solution to this problem?
Message versioning typically refers to a property of the message class itself rather than of a particular instance of the class. It is often used when multiple versions of a message-based API exist and must remain backwards-compatible with one another.
What you are actually referring to is unique message identifiers. GUIDs are particularly handy for making sure each message gets its own unique id. However, I would argue that de-duplication in message-based architectures is an anti-pattern: one of the consequences of using messaging is that duplicates are possible, so you should design your system behaviors to be stateless and idempotent. If that is not possible, consider that messaging may not be the right communication solution for this need.
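To make the idempotency point concrete, here is a tiny illustrative contrast (the function names are made up) between an idempotent command like "set on" and a non-idempotent one like "toggle"; duplicate delivery is harmless only for the former:

```python
# Idempotent vs. non-idempotent command handling under duplicate delivery.
# "state" stands in for the device; names are illustrative only.

state = {"on": False}

def set_on(s):
    s["on"] = True        # idempotent: applying it twice == applying it once

def toggle(s):
    s["on"] = not s["on"] # NOT idempotent: a duplicate flips the state back

set_on(state)
set_on(state)             # duplicate delivery: no harm done
assert state["on"] is True

state2 = {"on": False}
toggle(state2)
toggle(state2)            # duplicate delivery: device ends up off again
assert state2["on"] is False
```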
Using the command-event dichotomy as an example, you could perform the following transaction:
The controller issues the command, assigning a unique identifier to the command.
The control system receives the command and turns on.
The control system publishes the "light on" event notification, containing the unique id of the command that was used to turn on the light.
The controller receives the notification and correlates it to the original command.
If the controller doesn't receive the notification within some timeout, it can retry the command. Note that "light on" is an idempotent command, in that multiple calls to it have the same effect.
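The four-step transaction above can be sketched as follows. This is a hypothetical outline, not a real broker client: `pending`, `issue_command`, and `on_event` are names invented for illustration, and the publish/subscribe plumbing is elided.

```python
import uuid

# Command/event correlation: the controller tags each command with a unique
# id; the device echoes that id back in its event notification, closing the
# loop. Unconfirmed commands stay in "pending" and can be retried on timeout.

pending = {}  # command id -> command payload awaiting confirmation

def issue_command(command):
    cmd_id = str(uuid.uuid4())
    pending[cmd_id] = command
    # (a real controller would publish {cmd_id, command} to the broker here)
    return cmd_id

def on_event(cmd_id, event):
    """Called when the device publishes e.g. 'light on' with the id of the
    command that caused it."""
    if cmd_id in pending:
        del pending[cmd_id]   # confirmed: the loop is closed
        return True
    return False              # unknown or duplicate notification

cid = issue_command("light on")
assert on_event(cid, "light on") is True
assert not pending            # nothing left awaiting confirmation
```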
When the state changes, send the new state immediately and, after that, periodically every x seconds. With this solution your system converges to the desired state after some time, even when the device temporarily disconnects from the network (e.g. low battery).
BTW: you did not miss anything.
Apart from the comment that most brokers don't support QoS 2 (I suspect you mean that a number of broker-as-a-service offerings don't support QoS 2, such as Amazon's AWS IoT service), you have covered most of the major points.
If message order really is that important, then you will have to include some form of ordering marker in the message payload, be it a counter or a timestamp.
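A minimal sketch of how a receiver could use such a marker, assuming each payload carries a monotonically increasing sequence number (the field names and the `DeviceState` class are invented for illustration):

```python
# Device-side staleness filter: apply a control message only if its sequence
# number is newer than the last one applied; older arrivals are skipped.

class DeviceState:
    def __init__(self):
        self.last_seq = -1
        self.on = False

    def handle_message(self, seq, command):
        if seq <= self.last_seq:
            return False          # stale or duplicate: skip it
        self.last_seq = seq
        self.on = (command == "on")
        return True

state = DeviceState()
# Messages arrive out of order: "off" (seq 2) before "on" (seq 1).
state.handle_message(2, "off")
state.handle_message(1, "on")     # skipped: older sequence number
assert state.on is False          # the later command wins despite reordering
```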

OpenNMS threshold checks only one server

So I'm trying to configure OpenNMS to check the disk space on my linux servers.
After some work I got it to check one server through SNMP:
I installed snmpd on the server I'm monitoring, defined a threshold (in fact I use the predefined default one), and connected it to an event that triggers when ns-dskPercent goes too high. Up until here all went well.
Now I added a second server and installed the same stuff on it. OpenNMS seems to monitor its SNMP daemon and notifies me when the service is down, but it doesn't seem to see the threshold.
When I make changes to the threshold, for example lowering it to 20% to force it to trigger, only the first server sees the change (and also sends a notification that the configuration has changed) and fires the alarm; the second server doesn't respond.
(These are the notifications I get on the first server:)
High threshold rearmed for SNMP datasource ns-dskPercent on interface
xxx.xxx.xxx.xxx, parms: label="/" ds="ns-dskPercent" description="ns-dskPercent"
value="NaN (the threshold definition has been changed)" instance="1"
instanceLabel="_root_fs" resourceId="node[9].dskIndex[_root_fs]"
threshold="20.0" trigger="1" rearm="75.0" reason="Configuration has been changed"
High threshold exceeded for SNMP datasource ns-dskPercent on interface
xxx.xxx.xxx.xxx, parms: label="/" ds="ns-dskPercent" description="ns-dskPercent"
value="52" instance="1" instanceLabel="_root_fs"
resourceId="node[9].dskIndex[_root_fs]" threshold="20.0" trigger="1" rearm="75.0"
Any ideas why, or how I can make the second server respond as well?
The issue could lie in the source of the collected data. Thresholding in modern versions of OpenNMS (14+) is evaluated inline, in memory, as data is collected, so you must ensure that the threshold is evaluated against the exact metrics that the node you are interested in actually reports.
File system metrics on Linux systems usually come in one of two forms: the MIB-2 host resources table (hrStorageSize, etc. in $OPENNMS_HOME/etc/datacollection/mib2.xml) or net-snmp metrics from the NET-SNMP MIB (ns-dskTotal, etc. in $OPENNMS_HOME/etc/datacollection/netsnmp.xml).
So first verify that you are getting good data from the new server and that it is indeed collecting metrics from the same MIB table that you intend to threshold against.

Designing a message processing system

I have been asked to create a message processing system as described below. As I am not sure if this is the right place to post this, feel free to move it to any other appropriate SC group.
Problem
The server has about 100 to 500 clients connected at any moment. When a client connects, the server loads part of their data and caches it in memory for faster access. The server will receive between 200 and 1000 messages per second across all clients. These messages are relatively small (about 500 bytes). Any change to data in the cache should be saved to disk as soon as possible. When a client disconnects, all their data is saved to disk and removed from the cache. Each message contains an instruction and a text message which will be saved as a file. Instructions should be executed as fast as possible (near instant), and all clients using that file should get the update. Only writing the modified message to disk can be delayed.
Here is my solution in a diagram
My solution consists of a web server (HTTP or socket), a message queue, and two or more instances of a file server and an instruction server.
The web server grabs client messages and, if there is a message available for a client in the message queue, pushes it back to the client.
The instruction processor grabs instructions from the queue, creates the necessary message to be processed by the file server (get/set file), waits for the file to become available in the queue, and then does further processing to create another message for the client.
The file server only provides the files, either from cache or from a physical file depending on the type of file.
Concerns:
At peak times the total number of connected clients might go over 10,000 at once, and the total number of messages received from clients can increase to 10~15K per second.
I should be able to clear the queue and go back to the normal state as soon as possible (while still processing requests, obviously).
I should be able to add extra instruction processors and file servers on the fly without having to shut down the other instances.
In case the file server crashes it shouldn't lose files, so it has to write files to disk as soon as there are any changes and processing time is available.
The file store should be in B+ tree format so that some applications (local reporting apps) can easily access files without having to go through the queue server.
My Solution
I am thinking of using Node.js for the socket/web server, maybe a NoSQL database for the file server, and a queue server such as RabbitMQ, or Node_Redis and Redis.
Questions:
Is there a better way of structuring this system?
What are my other options for components of this system?
Is it possible to run all the instances on the same server machine, or even in the same application (in different threads)?
You have a couple of holes here, mostly around the web server "pushing" the message back to the client. That doesn't really work in a web-based world. You can try to use WebSockets, but generally this ends up being polling-based.
I don't know what the "instructions" to be executed are, but saving 1000 500-byte messages per second is trivial. Many NoSQL solutions boast million-plus-writes-per-second capacity, especially if you let the commit to disk lag.
Don't bother with the queue for the return of the file. A good NoSQL solution will scale better. Build out a Cassandra cluster and load test it until it can handle your peak load.
This simplifies your architecture into one or more web servers, clients polling those servers for file updates, a queue for submitting "messages" to the "instruction server" (also known as an application server in web-developer terms), and a NoSQL database for the instruction server to write files to.
This makes scaling easy: you can always add more web servers, and with a decent cluster size for your NoSQL server you should be able to scale horizontally there as well. Your only real bottleneck is the instruction server queue, which you can always throw more instruction servers at.
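The "clients polling for file updates" part of this architecture can be sketched very simply, assuming the server keeps a per-file version number (everything here, including `poll_for_update`, is an invented illustration, not an API):

```python
# Server side: a map of filename -> current version. A client polls with the
# version it already has; the server answers with a newer version number if
# one exists, so the client knows to fetch the updated file.

files = {"doc.txt": 3}

def poll_for_update(filename, client_version):
    server_version = files.get(filename, 0)
    if server_version > client_version:
        return server_version   # client should fetch the new file
    return None                 # client is up to date

assert poll_for_update("doc.txt", 2) == 3   # stale client: update available
assert poll_for_update("doc.txt", 3) is None  # current client: nothing to do
```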

Architecture and performance issue

I have a question about architecture/performance. I'm talking about a SIP server that processes multiple client requests concurrently. Suppose that each request is handled in a dedicated thread. At the end of processing, that thread logs request-specific info to a file. I want to optimize this last part of the processing: what alternatives would you propose instead of logging this info to a file directly? Why? Because writing to a file after processing uses resources that I would rather use to process other incoming requests.
First, what do you think about the question? And if you think it's a "true" question (I mean that an alternative may really improve performance), what do you propose?
I thought about pushing the log data onto a queue and using another process ON ANOTHER MACHINE that would read from the queue and write to a file.
Thanks for your suggestions.
If it is NOT a requirement that the log be written before the request returns, i.e. the logging is not part of the atomic response, then you have the option of returning the response and merely initiating the logging action.
Putting the logging data in an in-memory queue seems reasonable. You can read that queue and write to disk either on the same machine or on another. I would start with a thread in your app, as this is easiest to implement, and since disk I/O is going to be the limiting factor, it shouldn't impact your server much.
If the log is required to be written BEFORE the response is returned, you still have the option of using a reliable queue like MSMQ.
I suspect that the network overhead involved in moving the logging to another machine is probably going to create more problems than it solves. I would go with @Nicholas' solution: queue off the logs to one thread on the same machine. The queue provides slack so that occasional disk latency is mitigated, and the logging thread can make its own optimizations, e.g. waiting until it has a cluster-size worth of logs before writing. Other things, like opening a new log file every day or whenever the log file reaches a size limit, are also much easier without affecting the performance of the main server.
Even if you log on another machine, you should still queue off the logging to mitigate network latency.
If the log objects on the queue contain, say, a 'request' enumeration (e.g. ElogWrite, ElogNewFile, ElogPath, ElogShutdown), you could try both: you could queue up a request for the log thread to close its current log file and open a path to a file on a networked machine at runtime, and the queue buffer would absorb the delay of doing so.
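The same-machine variant described above can be sketched in a few lines: request threads enqueue records and return immediately, while one background thread drains the queue. This is a minimal illustration (a list stands in for the log file, and `None` plays the role of a shutdown request like ElogShutdown):

```python
import queue
import threading

# Queued logging: producers never block on disk I/O; a single writer thread
# serializes all writes.

log_queue = queue.Queue()
written = []                      # stand-in for the log file

def log_writer():
    while True:
        record = log_queue.get()
        if record is None:        # shutdown sentinel
            break
        written.append(record)    # real code would do file.write(record)

writer = threading.Thread(target=log_writer, daemon=True)
writer.start()

# Request threads just enqueue and carry on:
log_queue.put("request 1 processed")
log_queue.put("request 2 processed")
log_queue.put(None)               # ask the writer to stop
writer.join()
assert written == ["request 1 processed", "request 2 processed"]
```

Because only one thread touches the file, batching, daily rotation, or switching the output path at runtime all become local decisions inside `log_writer`.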