Why do I get HTTP/1.1 503 Service Unavailable when sending messages to the HTTP protocol adapter? - eclipse-hono

I keep getting error code 503 whenever I publish telemetry data to the HTTP protocol adapter of Eclipse Hono:
$ curl -i -u sensor1@DEFAULT_TENANT:hono-secret -H 'Content-Type: application/json' --data-binary '{"temp": 5}' http://hono.eclipse.org:8080/telemetry
HTTP/1.1 503 Service Unavailable
retry-after: 2
content-type: text/plain; charset=utf-8
content-length: 23

temporarily unavailable
What could be the reason?

This is usually caused by a missing consumer: when experimenting with Hono, it is easy to forget to start a consumer before sending telemetry or event messages.
From Hono's homepage:
If you haven’t started the application you will always get 503 Resource Unavailable responses because Hono does not accept any telemetry data from devices if there aren’t any consumers connected that are interested in the data. The reason for this is that Hono never persists Telemetry data and thus it doesn’t make any sense to accept and process telemetry data if there is no consumer to deliver it to.
It should also be noted that the consumer must be subscribed to the corresponding message type. A consumer can receive telemetry messages, event messages, or both; the type of message sent must match the type of consumer.
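Since the 503 response carries a retry-after header, a device client can simply back off and retry once a consumer is connected. A minimal sketch of that pattern (the helper names are hypothetical, not part of any Hono client library):

```python
import time
import urllib.error
import urllib.request

def retry_delay(headers, default=2):
    """Seconds to wait before retrying, taken from the Retry-After header."""
    try:
        return max(0, int(headers.get("Retry-After", default)))
    except (TypeError, ValueError):
        return default

def post_telemetry(url, payload, headers, max_attempts=3):
    """POST telemetry, backing off on 503 until a consumer is available."""
    for attempt in range(max_attempts):
        req = urllib.request.Request(url, data=payload, headers=headers, method="POST")
        try:
            return urllib.request.urlopen(req)
        except urllib.error.HTTPError as err:
            if err.code != 503 or attempt == max_attempts - 1:
                raise
            # Honor the server's hint before the next attempt
            time.sleep(retry_delay(err.headers))
```

Retrying forever without a consumer attached will of course never succeed; the back-off only helps when a consumer comes online shortly after the device starts publishing.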

Related

Azure Application Gateway is stripping the request body when it comes from Apigee

Apigee is trying to hit an API deployed in Azure as a function app with an application gateway integrated, but the request body is being stripped on the way in. As a result we are getting 400 Bad Request. We are clueless about what is happening here; any help would be welcome.
• As per your explanation, you are sending a POST request to the application gateway in Azure behind which the function app is deployed. When the request from Apigee hits the application gateway endpoint, the body of the request is dropped. This happens with specific API proxies only; in this case, the Apigee request is treating the deployed application gateway as an unacceptable API proxy. I would suggest taking the following steps:
a) Gather a TCP dump while the API request is made (with NGINX/Wireshark logs) and analyze it to confirm that the target port is 443.
b) Check the messages sent from the message processor to the backend server and look for a log entry like:
[Unencrypted HTTP protocol detected over encrypted port, could indicate a dangerous misconfiguration.]
c) This message appears if the target server definition was created without an 'SSLInfo' section. In that case, update the target server definition with the correct 'SSLInfo' section to make it secure.
d) Restart the 'Message Processors' on both the Apigee and the Azure side so they pick up the latest target server definition.
e) Check that the encoding specified in the HTTP request header 'Content-Encoding' is valid, and that the payload format sent by the client as part of the HTTP request matches the encoding declared in that header.
• The error you are encountering is a result of the last scenario above. You can fix the issue by sending the request header Content-Encoding:<payload format> and a request payload in that same format. For example:
curl -v "https://HOSTALIAS/v1/testgzip" -H "Content-Encoding: gzip" -X POST -d @request_payload.gz
For more details, refer to the documentation:
https://docs.apigee.com/api-platform/troubleshoot/runtime/400-decompressionfailureatrequest
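The fix in step (e) can be reproduced in any HTTP client: compress the body and declare a matching Content-Encoding. A small standard-library sketch (the header/body pair would then be sent to whatever endpoint your gateway exposes):

```python
import gzip
import json

# Compress the JSON body so the payload format matches the header we declare.
payload = json.dumps({"temp": 5}).encode("utf-8")
body = gzip.compress(payload)
headers = {
    "Content-Type": "application/json",
    "Content-Encoding": "gzip",  # must match the actual payload encoding
}

# Sanity check: the server decompresses the body back to the original JSON.
assert gzip.decompress(body) == payload
```

If the header says gzip but the body is plain text (or vice versa), decompression fails on the Apigee side and a 400 is returned.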

Still getting maximum buffer size error after allowing chunking in HTTP connector settings in Logic Apps

I have enabled chunking on the HTTP connector in my Logic App but am still getting the error: BadRequest. Http request failed as there is an error: 'Cannot write more bytes to the buffer than the configured maximum buffer size: 104857600.'.
(Screenshots for details: the HTTP connector of the Logic App, and the HTTP connector settings.)
I assume the request made by the HTTP connector returns a large amount of data, which requires chunking. According to this documentation, the endpoint you are sending the request to needs to serve partial data, which enables the HTTP connector to use chunking to download the whole content.
To download chunked messages from an endpoint over HTTP, the endpoint must support partial content requests, or chunked downloads.
Logic Apps can't control whether an endpoint supports partial requests. However, when your logic app gets the first "206" response, your logic app automatically sends multiple requests to download all the content.
Also, this might be helpful: I came across this thread while facing a similar problem with the SFTP connector.
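The partial-content mechanism the documentation describes boils down to the endpoint honoring Range headers: the downloader splits the resource into byte ranges and issues one request per range, each answered with a 206. A toy sketch of the range arithmetic (an illustration of chunked downloads, not Logic Apps code):

```python
def byte_ranges(total_size, chunk_size):
    """Split a resource of total_size bytes into inclusive (start, end)
    offsets, one pair per `Range: bytes=start-end` partial request."""
    ranges = []
    start = 0
    while start < total_size:
        end = min(start + chunk_size, total_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# A 250-byte resource downloaded in 100-byte chunks takes three 206 responses:
print(byte_ranges(250, 100))  # [(0, 99), (100, 199), (200, 249)]
```

If the endpoint ignores the Range header and returns the full body in one 200 response, the connector has to buffer everything, which is when the 104857600-byte limit bites.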

Azure API management gives 200 [not sent in full (see exception telemetries)]

We have a few APIs that are being long polled through Azure API Management. For some reason, we are receiving a response of 200 [not sent in full (see exception telemetries)] and then a System.Exception: A task was canceled. exception in App Insights.
Looking at the server-side App Service telemetry, the requests completed without any exception there.
Can anyone help me figure out what this status response means and why we are getting this exception?
These errors mean that APIM started to send the response to the client: it sent the status code and description and some portion of the headers and body, but the connection was closed before the rest went out. These traces should be accompanied by exception telemetry, as the response annotation suggests. Depending on what you see there, it may be:
Client connectivity error - client terminated connection before response was sent in full
Backend connectivity error - backend terminated connection before providing full response
The reasons for both may vary a lot, but given the small duration I'd suspect it's the client closing the connection. For example, if this API is used from a browser, it is normal for the browser to terminate the connection and abort reading the response when the user navigates away from the page that made the call.

Google App Engine Nodejs Bad Gateway Error

We have a project hosted on Google App Engine in its Node.js Flexible Environment to collect data from sensors.
We receive about 10 POST /collect requests per second that can be of very different sizes (~100 B up to ~12 MB), though 99% of the time they are really small.
Looking at the collected data, we see that every once in a while (like 5-6 times a day, apparently) we miss some data.
While investigating, we put a proxy (still on App Engine), let's call it PROXY, in front of our server, let's call it SERVER, in order to track the full flow and see all the errors and problems we might encounter.
We noticed that, when data is missing, PROXY has sent the data to SERVER and received back a 502 Bad Gateway; this appears in PROXY's logs (the proxy prints when a request arrives and when the server replies):
07:11:15.000 SENSOR_ID response: 502 Bad Gateway
07:11:15.000 SENSOR_ID request
We then went through the SERVER's logs and discovered that, at the same timestamp, we get the following:
07:11:15.000 [error] 32#32: *84209 upstream prematurely closed connection while reading response header from upstream, client: 130.211.1.151, server: , request: "POST /collect HTTP/1.1", upstream: "http://172.17.0.1:8080/collect", host: "ourprojectid.appspot.com"
Our first assumption was that big requests, with lots of data, caused the server to fail for whatever reason, but this is not the case: there is no correlation between these failure events and the size of the request.
Stack we are using: App Engine Flexible Environment instances (fronted by nginx) running Node.js.
We do not have any clue where to investigate further.

Azure is forcing client to send HTTP Expect 100 Continue with large data requests

I have an Azure Cloud Service with WorkerRole that accepts https requests.
It seems that some Azure magic forces the client to send Expect: 100-continue in an https request header if the payload is larger than 50 KB.
If you send a request with less than 50 KB of data to an Azure https endpoint, the server returns the response; otherwise the request times out. If Expect: 100-continue is added to a request larger than 50 KB, the request is accepted.
Any idea why and how to disable this feature?
It's actually the client that is in control of this. Your client implementation must be sending an HTTP header like so:
Expect: 100-continue
Otherwise the server wouldn't bother replying with a 100 Continue status.
If you do not want to use this feature of HTTP/1.1 then simply stop sending the header from your client. In .NET it's turned on by default and you can turn it off for all HttpWebRequests within an AppDomain using this static property:
ServicePointManager.Expect100Continue = false;
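To make the client-side nature of the header concrete, here is a hypothetical helper (the ~50 KB threshold comes from the question above, not from any Azure documentation) that opts into 100-continue only for large payloads, mirroring what a .NET client does by default:

```python
EXPECT_THRESHOLD = 50 * 1024  # the ~50 KB boundary reported in the question

def request_headers(body, base_headers=None):
    """Return request headers, adding Expect: 100-continue for large bodies.

    With the Expect header set, a compliant HTTP/1.1 server replies with
    100 Continue before the client transmits the (large) body.
    """
    headers = dict(base_headers or {})
    if len(body) > EXPECT_THRESHOLD:
        headers["Expect"] = "100-continue"
    return headers
```

The point is the same as the answer's: whether the header is present is entirely a client decision; the server merely reacts to it.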
