I'm new to this technology; can somebody help me clear up some doubts?
Q-1. What is the size of a CoAP packet?
(I know there is a 4-byte fixed header, but what is the maximum size limit including header, options and payload?)
Q-2. Is there any concept of Keep Alive, like in MQTT?
(CoAP works over UDP; for how long is the connection kept open? Is there a default time, or is it reopened every time we send a packet?)
Q-3. Can we use CoAP with TCP?
(The main problem with CoAP is that it works over UDP. Is there any concept like MQTT QoS? Let's say a sensor publishes some data every second; if the subscriber goes offline, is there any guarantee in CoAP that the subscriber will get all the data when it comes back online?)
Q-4. What is the duration of a connection?
(CoAP supports a publish/subscribe architecture, which may need the connection open all the time. Is that possible with CoAP, given that it is based on UDP?)
Q-5. How does it discover resources?
(I have one gateway and 5 sensors. How will these sensors connect to the gateway? Will the gateway find the sensors, or will the sensors find the gateway?)
Q-6. How does a sensor register with the gateway?
Please help me, I really need answers. I'm completely new to this kind of thing, so please also suggest something from an implementation point of view.
Thanks.
It Depends:
Core CoAP messages must be small enough to fit into their link-layer packets (~64 KiB is the absolute upper bound for UDP), but in any case the RFC states that a message:
SHOULD fit within a single IP packet to avoid IP fragmentation (MTU of 1280 for IPv6); if nothing is known about the size of the headers, good upper bounds are 1152 bytes for the message size and 1024 bytes for the payload size;
or less, to avoid adaptation-layer fragmentation (60-80 bytes for 6LoWPAN networks).
If you need to transfer larger payloads, this IETF draft (later published as RFC 7959) extends core CoAP with new options for transferring multiple blocks of a resource representation in multiple request/response pairs (so you can transfer representations larger than 64 KiB, just not in a single message).
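To make the block-wise idea concrete, here is a minimal Python sketch (no CoAP library involved; the helper names are mine) of the NUM/M/SZX arithmetic that RFC 7959 defines for its Block options:

```python
# A minimal sketch (not a full CoAP stack) of the RFC 7959 block-wise
# arithmetic: a Block option value packs the block number (NUM), a
# "more blocks follow" flag (M) and a size exponent (SZX), where the
# actual block size is 2 ** (SZX + 4).

def block_option_value(num: int, more: bool, szx: int) -> int:
    """Encode NUM/M/SZX into a single Block option value."""
    assert 0 <= szx <= 6          # SZX 0..6 -> block sizes 16..1024 bytes
    return (num << 4) | (int(more) << 3) | szx

def split_into_blocks(payload: bytes, szx: int = 6):
    """Yield (block_option_value, chunk) pairs for a large representation."""
    size = 2 ** (szx + 4)         # SZX=6 -> 1024-byte blocks
    blocks = [payload[i:i + size] for i in range(0, len(payload), size)]
    for num, chunk in enumerate(blocks):
        more = num < len(blocks) - 1
        yield block_option_value(num, more, szx), chunk

# A 100 KiB representation, far above the single-datagram bounds,
# becomes 100 blocks of 1 KiB, each travelling in its own
# request/response pair:
print(sum(1 for _ in split_into_blocks(b"x" * 102400)))  # 100
```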
I have never used MQTT, but in any case CoAP is connectionless: requests and responses are exchanged asynchronously over UDP (or DTLS). I suppose you are looking for the observe functionality: it enables CoAP clients to "subscribe" to resources and servers to push updates to subscribed clients over a period of time.
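As a rough illustration of what an observe registration looks like on the wire (RFC 7641), here is a hand-rolled Python sketch; the host and resource path are invented:

```python
import socket
import struct

# Hand-rolled sketch of an observe registration (RFC 7641): a plain
# CoAP GET carrying an empty Observe option (option number 6, value 0
# means "register"). Host and resource path are made-up examples.

message_id = 0x1234
header = struct.pack("!BBH", 0x40, 0x01, message_id)  # ver=1, CON, TKL=0; code 0.01 (GET)

# Options must appear in ascending option-number order:
observe = bytes([0x60])             # delta=6 (Observe), length=0 -> value 0
uri_path = bytes([0x54]) + b"temp"  # delta=11-6=5 (Uri-Path), length=4

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(header + observe + uri_path, ("coap.example.org", 5683))
# Each notification then arrives as a separate response datagram whose
# Observe option carries an increasing sequence number.
```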
There is an IETF draft describing CoAP over TCP, but I don't know how it interacts with the observe functionality. Observe usually follows a best-effort approach: if a subscriber goes offline, it is simply considered no longer interested in the resource and is removed by the server from the list of observers, so there is no guarantee it will receive the updates it missed.
Observation stops when the server decides the client is no longer interested in the resource, or when the client asks to unsubscribe from it.
There is a well-known relative URI, "/.well-known/core". It is defined as a default entry point for requesting the list of links to the resources hosted by a server. See here for more info.
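The response to a GET on /.well-known/core is a CoRE link-format payload (RFC 6690). A rough Python parsing sketch, with invented example links:

```python
# The payload returned by GET /.well-known/core uses the CoRE link
# format (RFC 6690). A rough parser sketch (it ignores commas inside
# quoted values); the example links below are invented.

example = '</sensors/temp>;rt="temperature";obs,</sensors/light>;rt="light-lux"'

def parse_core_links(payload: str):
    """Split a link-format payload into (path, attributes) pairs."""
    links = []
    for entry in payload.split(","):
        parts = entry.split(";")
        path = parts[0].strip("<>")
        attrs = dict(p.split("=", 1) if "=" in p else (p, True) for p in parts[1:])
        links.append((path, attrs))
    return links

for path, attrs in parse_core_links(example):
    print(path, attrs)
# /sensors/temp {'rt': '"temperature"', 'obs': True}
# /sensors/light {'rt': '"light-lux"'}
```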
See the answer to Q-5.
I'm doing research on different communication protocols used in Internet of Things (IoT).
I'm sending a "Hello" message from one Raspberry Pi to another, because I want to measure speed, bandwidth, etc.
How can I set the message size? I would like to try sending messages from 32 bytes up to 4032 bytes.
Edit: I'm testing MQTT, AMQP and SOAP communication protocols.
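For the MQTT part, one way to sweep payload sizes is simply to pad the message body, e.g. with the paho-mqtt client; the broker address and topic below are placeholders:

```python
import time
import paho.mqtt.client as mqtt

# Sketch of sweeping payload sizes over MQTT with paho-mqtt.
# Broker host and topic are placeholders; measure on your own setup.
# (With paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION2 as the
# first Client() argument; the bare form below is the 1.x style.)

client = mqtt.Client()
client.connect("192.168.1.10", 1883)   # the other Raspberry Pi / broker
client.loop_start()

for size in range(32, 4033, 500):      # 32 B up to 4032 B
    payload = b"x" * size              # pad the message to the target size
    start = time.perf_counter()
    info = client.publish("bench/latency", payload, qos=1)
    info.wait_for_publish()            # block until the broker acked (QoS 1)
    elapsed = time.perf_counter() - start
    print(f"{size} bytes: {elapsed * 1000:.2f} ms")

client.loop_stop()
client.disconnect()
```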
Check nats.io's Benchmark and Tune NATS guide.
It mainly focuses on nats.io, but the output can be used to benchmark your systems against other protocols.
Here you can find more info about how it is being used for IoT.
Regarding the message size https://nats.io/documentation/faq/#msgsize:
NATS does have a message size limitation that is enforced by the server and communicated to the client during connection setup. Currently, the limit is 1MB.
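If you end up testing NATS from Python, the negotiated limit is visible on the connection object. A minimal sketch with the nats-py client (the server URL is a placeholder):

```python
import asyncio
import nats

# Minimal nats-py sketch (server URL is a placeholder). The server
# communicates its payload limit during connection setup, and the
# client exposes it, so oversized publishes can be rejected locally.

async def main():
    nc = await nats.connect("nats://127.0.0.1:4222")
    print("max payload:", nc.max_payload)   # typically 1 MB by default
    await nc.publish("bench.msgs", b"x" * 4032)
    await nc.flush()
    await nc.close()

asyncio.run(main())
```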
This is my first question here, and I realize it might be open-ended, but I'm looking for specific solutions, and any workable solution would be accepted.
I have GPS devices which send data packets to an IP and port, both of which I can configure. I wish to use one of Google's, Amazon's or Microsoft's cloud offerings. I am using Python. Here is an implementation I found online:
https://github.com/rdkls/gps-tracker-server
The data arrives as packets that are not sent over HTTP. I have considered building a network listener on a socket on Google Compute Engine, but I'm not sure it would handle simultaneous requests from 1000 devices if such a situation ever arose. The Google Cloud IoT Core offering seems to fit my needs perfectly, but it is in private beta right now, which means I can't use it. I think I'll need a message-queue service, but most of the offerings from these three companies require messages over HTTP. Keep in mind that I can't change how the messages are sent from the GPS devices.
The messages sent are in this format -
https://drive.google.com/file/d/0B2EklrIn3KugS2NJYWZGWlVWeGdMbjM4WHQ2TUZmYWhIRmt3/view?usp=drive_web
Format:
Data is sent in (byte-sized) packets directly to the IP:port over GPRS connections: one heartbeat packet every minute and GPS details every minute from each device. The server is also required to reply to each message for acknowledgement, since it's not over TCP/IP.
So basically, which service and which architecture should I use, keeping scalability, reliability and cost in mind?
I think that 1000 devices each sending such messages every minute comes to about 43M messages a month. I'm not sure, but I'm looking for something that would cost me about $1000 a month, i.e. $1 per device per month.
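For what it's worth, 1000 devices sending one datagram a minute is only ~17 packets per second, which a single event loop handles comfortably. A sketch of a raw listener with asyncio, assuming the trackers send plain UDP datagrams (the port and ack byte are placeholders; the real acknowledgement must follow the tracker's protocol):

```python
import asyncio

# Sketch of a raw UDP listener that acks every datagram, assuming the
# devices speak a non-HTTP binary protocol. Port and ack bytes are
# placeholders.

class TrackerProtocol(asyncio.DatagramProtocol):
    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, addr):
        # Parse the heartbeat / GPS packet here, then push it to a
        # queue or a managed message bus (e.g. Cloud Pub/Sub) for storage.
        print(f"{len(data)} bytes from {addr}")
        self.transport.sendto(b"\x01", addr)   # placeholder acknowledgement

async def main():
    loop = asyncio.get_running_loop()
    transport, _ = await loop.create_datagram_endpoint(
        lambda: TrackerProtocol(), local_addr=("0.0.0.0", 5023))
    await asyncio.Event().wait()               # serve forever

asyncio.run(main())
```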
I need a simple communication protocol between multiple devices (a server with one RS-485 serial port connected to a few clients). The server acts as a concentrator and sends requests to a specific client. The client replies to the requests and occasionally sends notifications. Operation is generally asynchronous, the data payload is a few hundred bytes, distances are short, and timing is not critical.
Is there any standard solution for such a situation? (I only need a clue to get started.)
Modbus is a common standard: it supports up to 247 clients, each with many read/write registers, on one RS-485 bus.
But clients can't send notifications without being queried.
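To give a feel for how simple the wire format is, here is a sketch of building a Modbus RTU read request by hand in Python; the slave address, register and count are arbitrary examples:

```python
import struct

# Sketch of a Modbus RTU "read holding registers" request (function
# 0x03) built by hand, including the standard CRC-16 (poly 0xA001).
# Slave address, start register and count are arbitrary examples.

def crc16(frame: bytes) -> int:
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def read_holding_registers(slave: int, start: int, count: int) -> bytes:
    pdu = struct.pack(">BBHH", slave, 0x03, start, count)
    return pdu + struct.pack("<H", crc16(pdu))   # CRC goes low byte first

request = read_holding_registers(slave=1, start=0x0000, count=2)
print(request.hex())  # write this to the RS-485 port (e.g. with pyserial)
```

In practice a library such as pymodbus handles the framing and CRC for you; the sketch is just to show how little is on the wire.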
I'm currently studying for a project involving a web server and some Raspberry Pis.
The challenge is basically to watch each Raspberry Pi's "status" on a web interface.
The Raspberry Pis are connected to the Internet through a GSM connection (mostly 3G).
I'm developing with Node.js on both the clients and the server, and I'd like to use WebSockets through socket.io in order to watch the Raspberry Pis' connection status (actually, this is more like watching each Raspberry Pi's ability to upload data through my application), handling "connected" and "disconnected" events.
Would an always-alive WebSocket connection be reliable for such a use-case?
Are WebSockets designed (and reliable enough) to stay open?
Since this is a hard-to-test situation, does anyone know a data-consumption estimate for an always-alive WebSocket?
If I'm going about this the wrong way, has anyone worked on such a use-case in another reliable way?
Would an always-alive WebSocket connection be reliable for such a use-case? Are WebSockets designed (and reliable enough) to stay open?
Yes, WebSocket was designed to stay open, and yes, it's reliable for your use-case: a WebSocket connection is just a TCP connection which transmits data in frames.
Since this is a hard-to-test situation, does anyone know a data-consumption estimate for an always-alive WebSocket?
As I wrote, data in WebSocket connections is transmitted in frames; each frame has a header and a payload. Data sent from the client to the server is always masked, which adds 4 bytes (the masking key) to each frame. The length of the header depends on the payload length:
2-byte header for payloads of up to 125 bytes
4-byte header for payloads of up to 65,535 bytes
10-byte header for payloads of up to 2^64-1 bytes (rarely used)
Base Framing Protocol: https://www.rfc-editor.org/rfc/rfc6455#section-5.2
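To make the overhead concrete, here is a small Python sketch that builds a masked client-to-server frame following that layout (the helper name is mine):

```python
import os
import struct

# Sketch of the RFC 6455 frame-header layout described above, for a
# masked client-to-server text frame. It shows where the 2/4/10-byte
# header and the 4-byte masking key come from.

def client_frame(payload: bytes, opcode: int = 0x1) -> bytes:
    head = bytes([0x80 | opcode])          # FIN=1 + opcode (0x1 = text)
    n = len(payload)
    if n <= 125:
        head += bytes([0x80 | n])          # MASK bit + 7-bit length
    elif n <= 0xFFFF:
        head += bytes([0x80 | 126]) + struct.pack("!H", n)
    else:
        head += bytes([0x80 | 127]) + struct.pack("!Q", n)
    key = os.urandom(4)                    # masking key: +4 bytes per frame
    masked = bytes(b ^ key[i % 4] for i, b in enumerate(payload))
    return head + key + masked

frame = client_frame(b"hello")
print(len(frame))  # 2-byte header + 4-byte key + 5-byte payload = 11 bytes
```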
To keep the connection open, the server sends ping frames at an implementation-specific interval (usually ~30 seconds). A ping frame is 2-127 bytes long, usually 2 bytes (only the header, with no payload), and the client responds with pong frames of the same size range.
I'm new to socket programming, and I need to implement a UDP-based rateless file transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C, etc. The file is divided into a number of packets. At the beginning, peers send a Request message to the server to initiate the transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets to that client (the encoding is my own design; it has erasure-correction capability, which is why I can transmit ratelessly over UDP). The client keeps collecting packets and trying to decode them. When it finally decodes all the packets and reconstructs the file successfully, it sends a Stop message back to the server, and S stops transmitting to that client.
Peers request the file asynchronously (they may request it at different times), and the server has to serve multiple peers concurrently. The encoded packets for different clients are different (though they are all encoded from the same set of source packets).
Here is what I'm thinking for the implementation. I don't have much experience with Unix network programming, though, so I'm wondering if you can help me assess it and see whether it is feasible and efficient.
I'm going to implement the server as a concurrent UDP server with two socket ports (similar to TFTP, according to the UNP book). One is for receiving control messages, which in my context are the Request and Stop messages. The server will maintain a flag (initially 1) for each request; when it receives a Stop message from the client, the flag is set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. The server keeps sending packets to the client as long as the flag is 1; when it turns to 0, the sending ends.
The client program is easy: just send a Request, recvfrom() the server, progressively decode the file, and send a Stop message at the end.
Is this design workable? My main concerns are: (1) is forking multiple processes efficient, or should I use threads? (2) If I have to use multiple processes, how can the flag bit be seen by the child process? Thanks for your comments.
Using UDP for file transfer is not the best idea. There is no way for the server or client to know whether any packet has been lost, so you would only find out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request just the packets that got lost, and in the end you would have code that does what TCP sockets already do. So I suggest starting with TCP.
The typical design of a server involves a listener thread that spawns a worker thread whenever there is a new client request. That new thread handles communication with that particular client and then exits. You should keep a limit on the number of clients (threads) served simultaneously. Do not spawn a new process for each client: that is inefficient and unnecessary, as it gets you nothing you can't achieve with threads.
Thread programming requires care, so do not cut corners; otherwise you will have a hard time finding and diagnosing problems.
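As a starting point, here is a sketch of that listener/worker pattern using Python's socketserver for brevity; the port, file name and thread limit are invented:

```python
import socketserver
import threading

# Sketch of the listener/worker pattern described above, using
# Python's socketserver. The semaphore caps the number of clients
# served at once; file name, port and limit are invented examples.

MAX_WORKERS = threading.Semaphore(32)

class FileHandler(socketserver.BaseRequestHandler):
    def handle(self):
        with MAX_WORKERS:                        # enforce the client limit
            with open("payload.bin", "rb") as f:
                while chunk := f.read(4096):
                    self.request.sendall(chunk)  # TCP handles loss/ordering

class ThreadingServer(socketserver.ThreadingTCPServer):
    daemon_threads = True
    allow_reuse_address = True

with ThreadingServer(("0.0.0.0", 9000), FileHandler) as server:
    server.serve_forever()   # one worker thread per connected client
```

The semaphore is the simplest way to honor the "limit the number of simultaneous clients" advice; a thread pool would work equally well.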
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This enables each client to detect, and ask for the retransmission of, any missing blocks at the end of the transfer.
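For example, a minimal Python sketch of such a message layout (the field sizes are my choice, not a standard):

```python
import struct
import zlib

# Sketch of the per-message layout suggested above: a sequence number
# plus a checksum in front of each block. Field sizes are my choice.

HEADER = struct.Struct("!II")   # 4-byte sequence number, 4-byte CRC-32

def pack_block(seq: int, block: bytes) -> bytes:
    return HEADER.pack(seq, zlib.crc32(block)) + block

def unpack_block(datagram: bytes):
    seq, crc = HEADER.unpack_from(datagram)
    block = datagram[HEADER.size:]
    if zlib.crc32(block) != crc:
        return None             # corrupt: treat it like a missing block
    return seq, block

# The receiver records the seq numbers it has seen; the gaps left at
# the end of the transfer are exactly the blocks to ask to be resent.
```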
Where UDP might be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it is missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CAN bus network with dozens of microcontrollers that needed new images downloaded. Software upgrades took minutes instead of hours.