Azure TCP segmentation and HTTP chunks

I have an Azure application that connects to a large installed base of devices. Everything had been working fine for years until today, when it all stopped working. What I think has happened is that Azure is now segmenting the small (87-byte payload) messages, and this has exposed a bug in my TCP handler.
Does anyone know if there is a way of forcing Azure not to segment small TCP messages?
Follow up - I think this is because the HTTP message is 'chunked' and sent as two TCP segments. There is a bug in my code that does not handle chunks, which has only now surfaced.
Can I turn off chunking in Azure?

If your question is whether there is a way to prevent an HTTP response from using HTTP chunked transfer encoding, then perhaps, depending on the server-side APIs you are using. Setting a Content-Length header usually suffices.
But I think your question is whether you can force Azure to buffer IP packets in a way that avoids the bug in your client, which is assuming that a single socket Read() will return a complete message. In which case, no. It may not even be within Azure's control, as any intermediate router could cause a delay in the delivery of packets, which, in turn, will cause the client's Read() to return a partial message.
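If the fix has to be on the client side, the usual approach is to keep reading until a complete message has arrived instead of assuming one read returns one message. Here is a minimal sketch in TypeScript on a Node socket, assuming the expected byte count is known up front (for example from a fixed frame size or a parsed Content-Length header); the function and parameter names are illustrative only:

import * as net from "net";

// Accumulate 'data' events until the expected number of bytes has arrived.
// A single event may carry only part of the message, or more than one piece.
function readMessage(socket: net.Socket, expectedBytes: number): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    const chunks: Buffer[] = [];
    let received = 0;
    const onData = (data: Buffer) => {
      chunks.push(data);
      received += data.length;
      if (received >= expectedBytes) {
        socket.off("data", onData);
        resolve(Buffer.concat(chunks).subarray(0, expectedBytes));
      }
    };
    socket.on("data", onData);
    socket.once("error", reject);
  });
}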

I found a solution here: Disable chunking in Asp.Net Core
In summary:
response.Headers["Content-Encoding"] = "identity";
response.Headers["Transfer-Encoding"] = "identity";
This seems to disable the unnecessary chunking. I have no idea how.
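For what it's worth, the general rule from the earlier answer is that a response whose length is declared up front does not need to be chunked. The same principle can be seen with a plain Node HTTP server (shown in TypeScript purely as an illustration of the rule, not of the ASP.NET Core pipeline):

import * as http from "http";

http.createServer((req, res) => {
  const body = Buffer.from(JSON.stringify({ status: "ok" }));
  // Because Content-Length is set, the response is sent with a fixed
  // length instead of Transfer-Encoding: chunked.
  res.writeHead(200, {
    "Content-Type": "application/json",
    "Content-Length": body.length,
  });
  res.end(body);
}).listen(8080);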

Related

Communicating with an unsecure device: Security by Abstraction Vs HTTP HTTPS callback

I have a web server with an SSL certificate, and an unsecured device on a GSM/GPRS network (Arduino MKR GSM 1400). The MKR GSM 1400 library does not provide an SSL server, only an SSL client. I would prefer to use a library if that's possible, because I don't want to write an SSL server class. I am considering writing my own protocol, but I'm familiar with HTTPS, and using it would make writing the interface on the webserver side easier.
The GSM Server only has an SSL Client
I am in control of both devices
Commands are delivered by a text string
Only the webserver has SSL
My C skills are decent at best
I need the SSL server to be able to send commands to the Arduino device, but I want these commands to be secured (the Arduino device opens and closes valves in a building).
The other option would maybe be some sort of PSK, but I wouldn't know where to start on that. Is there an easy function to encrypt and decrypt a "command string"? I also don't want "attackers" to be able to replay commands that I've sent before.
My basic question is: does this method provide some reasonable level of security? Or is there some way to do this that I'm not thinking of?
While in a perfect world there would be a better approach, you are currently working within the limits of what your tiny system provides.
In this situation I find your approach reasonable: the server simply tells the client using an insecure transport that there is some message awaiting (i.e. sends some trigger message, actual payload does not matter) and the client then retrieves the message using a transport which both protects the message against sniffing and modification and also makes sure that the message actually came from the server (i.e. authentication).
Since the trigger message from the server contains no actual payload (the arrival of the message itself is the payload), an attacker could not modify or fake the message to create insecure behavior in the client. The worst that could happen is that an attacker blocks the client from getting the trigger messages, or fakes trigger messages even though there is no actual command waiting on the server.
If the last case is seen as a problem, it could be dealt with by a rate limit: if the server did not return any command although the client received a trigger message, then the client should wait some minimum time before contacting the server again, no matter whether another trigger message is received. The first case, the attacker being able to block messages from the server, is harder to deal with, since such an attacker is likely able to block further communication between client and server anyway - but this is a problem for any kind of communication between client and server.
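The rate limit itself is only a few lines. A sketch of the idea in TypeScript (the real device code would be C on the Arduino, and the interval value is made up):

const MIN_FETCH_INTERVAL_MS = 60_000; // illustrative back-off value
let nextAllowedFetch = 0;

// Called whenever a trigger message arrives over the insecure channel.
// fetchCommand() retrieves the actual command over HTTPS, so its content
// is protected and authenticated; the trigger itself carries nothing.
async function onTriggerReceived(fetchCommand: () => Promise<string | null>) {
  if (Date.now() < nextAllowedFetch) return; // ignore triggers arriving too fast
  const command = await fetchCommand();
  if (command === null) {
    // Trigger without a pending command: possibly a faked trigger, so
    // wait a minimum time before contacting the server again.
    nextAllowedFetch = Date.now() + MIN_FETCH_INTERVAL_MS;
  }
}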

Handle different UDP message types in Nodejs

I'm writing an application in Node.js where a client sends UDP messages to a server. I'm trying to find out how people normally handle different message types in Node.js, but I can only find tons of examples of echo servers where the kind of message is not relevant.
The only example I have found so far is https://github.com/vbo/node-webkit-mp-game-template/tree/networking_1/networking
Maybe the best way is to send the UDP messages as JSON?
User Datagram Protocol (UDP) is a network protocol and mechanism for sending short messages from one host to another without any guarantee of delivery. What you put in the message is entirely up to you.
While JSON can be used to encode your message, it suffers from two problems: it is not secure and is self-describing.
The first problem means that bad actors can easily see the content of your message while in flight and the second implies a substantial overhead for any message above and beyond its intended purpose.
Depending on your needs, a better choice might be to define your own binary protocol specific to your purpose using a node Buffer.
Another might be to use a more compact interchange format like Thrift.
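To make the Buffer suggestion concrete, here is a minimal TypeScript sketch of a self-defined binary message: one type byte followed by a type-specific payload. The message types and field layout are invented for illustration:

import * as dgram from "dgram";

const MSG_POSITION = 1; // example type codes, made up for this sketch
const MSG_CHAT = 2;

function encodePosition(x: number, y: number): Buffer {
  const buf = Buffer.alloc(9);        // 1 type byte + two 32-bit floats
  buf.writeUInt8(MSG_POSITION, 0);
  buf.writeFloatBE(x, 1);
  buf.writeFloatBE(y, 5);
  return buf;
}

const server = dgram.createSocket("udp4");
server.on("message", (msg, rinfo) => {
  switch (msg.readUInt8(0)) {         // dispatch on the leading type byte
    case MSG_POSITION:
      console.log("position", msg.readFloatBE(1), msg.readFloatBE(5), "from", rinfo.address);
      break;
    case MSG_CHAT:
      console.log("chat", msg.subarray(1).toString("utf8"));
      break;
  }
});
server.bind(41234);

const client = dgram.createSocket("udp4");
client.send(encodePosition(3.5, 7.25), 41234, "localhost");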

What is the size of CoAP packet?

I'm new to this technology; can somebody help me clear up some doubts?
Q-1. What is the size of CoAP packet?
(I know there is 4 byte fixed header, but what is the maximum size limit including header, option and payload?)
Q-2. Is there any concept for Keep Alive like MQTT?
(It works over UDP; for how long does it keep the connection open? Is there any default time, or is it kept open only while we send packets?)
Q-3. Can we use CoAP with TCP?
(The main problem with CoAP is that it works over UDP. Is there any concept like MQTT QoS? Let's say a sensor publishes some data every second; if the subscriber goes offline, is there any guarantee in CoAP that the subscriber will get all the data when it comes online?)
Q-4. What is the duration of connection?
(CoAP supports a publish/subscribe architecture, which may need a connection open all the time; is that possible with CoAP given that it is based on UDP?)
Q-5. How does it discover the resources?
(I have one gateway and 5 sensors. How will these sensors connect to the gateway? Will the gateway find the sensors, or will the sensors find the gateway?)
Q-6. How does a sensor register with the gateway?
Please help me, I really need answers. I'm new to all of this; please suggest something from an implementation point of view.
Thanks.
It depends:
Core CoAP messages must be small enough to fit into a single datagram (for UDP that is at most ~64 KiB), but in any case the RFC states that:
it SHOULD fit within a single IP packet to avoid IP fragmentation (MTU of 1280 for IPv6). If nothing is known about the size of the headers, good upper bounds are 1152 bytes for the message size and 1024 bytes for the payload size;
or less to avoid adaptation layer fragmentation (60-80 bytes for 6LoWPAN networks);
if you need to transfer larger payloads, the block-wise transfer extension (an IETF draft at the time, now RFC 7959) extends core CoAP with new options for transferring a resource representation in multiple request-response pairs (so you can transfer representations larger than what fits in a single message).
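To make the sizes above concrete, the 4-byte fixed header mentioned in Q-1 can be decoded like this (a TypeScript sketch following the field layout in RFC 7252; the token, options and payload follow these 4 bytes):

function parseCoapHeader(msg: Buffer) {
  const version = msg[0] >> 6;          // 2 bits, always 1 for RFC 7252
  const type = (msg[0] >> 4) & 0x03;    // 0=CON, 1=NON, 2=ACK, 3=RST
  const tokenLength = msg[0] & 0x0f;    // 0..8 token bytes follow the header
  const codeClass = msg[1] >> 5;        // 0 for requests, 2/4/5 for responses
  const codeDetail = msg[1] & 0x1f;     // e.g. 0.01 = GET, 2.05 = Content
  const messageId = msg.readUInt16BE(2);
  return { version, type, tokenLength, code: codeClass + "." + String(codeDetail).padStart(2, "0"), messageId };
}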
I have never used MQTT; in any case, CoAP is connectionless: requests and responses are exchanged asynchronously over UDP or DTLS. I suppose that you are looking for the observe functionality: it enables CoAP clients to "subscribe" to resources and servers to send updates to subscribed clients over a period of time.
There is an IETF draft describing CoAP over TCP, but I don't know how it interacts with the observe functionality. Observe usually follows a best-effort approach: if updates cannot be delivered, the client is simply considered no longer interested in the resource and is removed by the server from the list of observers.
The observe relationship ends when the server thinks that the client is no longer interested in the resource, or when the client asks to unsubscribe from it.
There is a well-known relative URI, "/.well-known/core". It is defined as a default entry point for requesting the list of links to the resources hosted by a server; see the CoRE Link Format specification for more info.
See the answer to question 5.
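To make the discovery step concrete, a request to "/.well-known/core" returns a CoRE Link Format (RFC 6690) listing of the server's resources; the resource names below are invented:

GET coap://gateway.example/.well-known/core

2.05 Content
</sensors/temp>;rt="temperature";if="sensor",
</actuators/valve1>;rt="valve";if="actuator"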

Response body missing characters

I've seen this issue happen on multiple machines, using different languages and server-side environments. It seems to always be IIS, but it may be more widespread.
On slower connections, characters are occasionally missing from the response body. It happens somewhere between 25% and 50% of the time but only on certain pages, and only on a slow connection such as VPN. A refresh usually fixes the issue.
The current application in question is .NET 4 with SQL Server.
Example:
<script>
document.write('Something');
</script>
is being received by the client as
<scrit>
document.write('Something');
</script>
This causes the JavaScript inside the tag to be printed to the page rather than executed.
Does anyone know why this occurs? Is it specific to IIS?
Speaking generally, the problem you describe would require corruption at the HTTP layer or above, since TCP/IP has checksums, packet lengths, sequence numbers, and re-transmissions to avoid this sort of issue.
That leaves:
The application generating the data
Any intermediate filters between the application and the server
The HTTP server returning the data
Any intermediary HTTP proxies, transparent or otherwise
The HTTP client requesting the data
The user-agent interpreting the data
You can diagnose further based on network captures performed at the server edge and at the client edge.
Examine the request made by the client at the client edge to verify that the client is making a request for the entire document, and is not relying upon cache (no Range or If-* headers).
If the data is correct when it leaves the server (pay particular attention to the Content-Length header and verify it is a 200 response), neither the server nor the application are at fault.
If the data is correct as received by the client, you can rule out intermediary proxies.
If there is still an issue, it is a user-agent issue.
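A quick way to sanity-check the client edge without a full packet capture is to compare the declared Content-Length with the number of bytes actually received. A TypeScript sketch using Node's http module; the URL is a placeholder:

import * as http from "http";

http.get("http://intranet.example/page.aspx", (res) => {
  let received = 0;
  res.on("data", (chunk: Buffer) => { received += chunk.length; });
  res.on("end", () => {
    // If the response was chunked there is no Content-Length to compare against.
    const declared = res.headers["content-length"];
    console.log(`status ${res.statusCode}, declared ${declared}, received ${received}`);
  });
});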
If I had to psychically debug such a problem, I would look first at the application to ensure it is generating the correct document, then assume some interloper is modifying the data in transit (some HTTP proxy for WAN acceleration, aggressive caching, virus scanning, etc.). I might also assume some browser plugin or ad blocker is modifying the response after it is received.
I would not, however, assume it is the HTTP server without very strong evidence. If corruption is detected on the client, but not the server, I might disable TCP Offload and look for an updated NIC Driver.

Camel inout with very long response times

We have the following scenario that we would like to solve using Apache Camel:
An asynchronous request arrives at an AMQP endpoint configured in Camel. This message contains a header property with a reply-to that should be used for the response. Camel must pass this message to another service using JMS and then route the response back to the reply-to queue from the AMQP request. This seems like a textbook example for using the InOut functionality in Camel, but we have one problem: the reply from the JMS service could take a long time, in some cases several days.
As I understand it, using InOut would mean locking a thread on the long-running service. If we are unlucky, we could get several long-running calls simultaneously, and in the worst case all threads would be busy waiting for replies, clogging the system.
What strategy should I use for solving the problem described above? At the moment, I have created two separate routes: one that listens to the AMQP endpoint and forwards the message to the JMS endpoint, and another that listens to the reply-to queue of the JMS system and is responsible for sending the reply back to the AMQP reply-to. The problem I have right now is how I should store the AMQP reply-to between these two routes, and I am not sure this is a good solution overall for this problem.
Any tips or ideas on how to solve this problem would be greatly appreciated.
If you have to wait more than a minute for a reply, it's probably a good idea to treat the reply as asynchronous and create separate request and response routes.
Since you mention several days, you might even want to survive an application restart (or even a backup/restore) and still correlate the response. In such cases, you need to store the correlation information in a persistent store such as a database, or in a JMS queue using message properties and selectors to retrieve the correlation information later.
I've used both queues and databases for long time request/reply correlation information with success.
It's always a good practice to be able to fail over/restart the server or the application at any time knowing that any ongoing processing will take up where it left off without errors.
There is a cost in complexity and performance, but robustness is often preferred over performance.
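Not Camel-specific, but the shape of the correlation idea sketched in TypeScript: persist the AMQP reply-to keyed by a correlation id when the request is forwarded, and look it up again when the reply eventually arrives (the store must be durable if replies can take days):

// The storage behind this interface could be a database table or a JMS
// queue with message selectors, as described above.
interface CorrelationStore {
  save(correlationId: string, replyTo: string): Promise<void>;
  take(correlationId: string): Promise<string | undefined>;
}

// Request route: remember where the answer must go, then fire and forget.
async function onAmqpRequest(
  store: CorrelationStore,
  msg: { correlationId: string; replyTo: string; body: string },
  sendToJms: (body: string, correlationId: string) => Promise<void>
) {
  await store.save(msg.correlationId, msg.replyTo);
  await sendToJms(msg.body, msg.correlationId); // no thread is held waiting
}

// Response route: possibly days later, look up the reply-to and forward the answer.
async function onJmsReply(
  store: CorrelationStore,
  reply: { correlationId: string; body: string },
  sendToAmqp: (destination: string, body: string) => Promise<void>
) {
  const replyTo = await store.take(reply.correlationId);
  if (replyTo) await sendToAmqp(replyTo, reply.body);
}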
