Apigee is trying to hit an API deployed in Azure as a Function App with an Application Gateway in front of it, but when the request comes in, the request body is being stripped. As a result we are getting a 400 Bad Request. We are clueless about what is happening here; any help would be welcome.
• As per your explanation, you are sending a POST request to the Application Gateway in Azure, behind which the Function App is deployed. When the request from Apigee hits the Application Gateway endpoint, the body of the request gets dropped. This happens with specific API proxies only; in this case, the Apigee request is treating the deployed Application Gateway as an unacceptable API proxy. I would therefore request you to take the following steps: -
a) Gather a TCP dump (with NGINX/Wireshark logs) while the API request is made and analyze it to confirm that the target port is 443.
b) Check the messages sent from the Message Processor to the backend server and look for any log entry like the one below: -
[Unencrypted HTTP protocol detected over encrypted port, could indicate a dangerous misconfiguration.]
c) If the target server definition was created without an ‘SSLInfo’ section, the above message appears in the logs. In that case, update the target server definition with the correct ‘SSLInfo’ section to make it secure.
d) Then restart the ‘Message Processors’ on both the Apigee and the Azure side so they pick up the latest target server definition.
e) Also, check that the encoding specified in the HTTP request header ‘Content-Encoding’ is valid, and that the payload format sent by the client as part of the HTTP request matches the encoding format specified in the ‘Content-Encoding’ header.
• The error you are encountering is a result of the scenario stated above. You can fix the issue by sending the request header Content-Encoding:<payload format> and the request payload in that same format. An example is given below: -
curl -v "https://HOSTALIAS/v1/testgzip" -H "Content-Encoding: gzip" -X POST --data-binary @request_payload.gz
For more details regarding the above, kindly refer to the documentation link below: -
https://docs.apigee.com/api-platform/troubleshoot/runtime/400-decompressionfailureatrequest
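For illustration, the same request can be reproduced in Python. The host alias and payload below are placeholders (the endpoint is taken from the curl example above), and the final send is left commented out since it targets a placeholder URL:

```python
import gzip
import json
import urllib.request

# Placeholder payload; any JSON body the proxy expects would do
payload = json.dumps({"message": "hello"}).encode("utf-8")

# Compress the payload so it actually matches the declared Content-Encoding
body = gzip.compress(payload)

req = urllib.request.Request(
    "https://HOSTALIAS/v1/testgzip",  # placeholder host alias from the curl example
    data=body,
    headers={"Content-Encoding": "gzip", "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

The key point is that the bytes placed in the request body are the gzip-compressed form of the payload, matching the header; sending uncompressed JSON with `Content-Encoding: gzip` is exactly what triggers the decompression failure.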
I would like to send updates about a running HTTP request to the client, to tell it what stage the request-triggered process is currently at.
The process behind the request currently does the following things (in this order):
Client-side:
Client sends an HTTP Request (upload of a file) to the server
Server-side:
Takes the uploaded file
Encrypts it
Uploads it to an archive storage
Returns a response to the client
(Meanwhile, the client does not know what currently happens)
Client-side:
Get response and show it to the user
I want to tell the client at what stage the process is, like “Uploading done. Encrypting…” and so on.
Is there a way to realize that, or am I missing something? Is it even possible to do?
Frameworks I'm using:
Client: Next.js
Server: Hapi.dev for API development
Thanks
You can send non-final 1xx (informational) responses for your HTTP request, as described here.
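Frameworks rarely expose interim responses directly, so here is a minimal raw-socket sketch (hypothetical, not Hapi-specific) of a server emitting non-final 1xx responses before the final 200. Be aware that many clients, proxies, and browsers ignore or swallow 1xx responses, so test this with your actual stack:

```python
import socket
import threading

def handle(conn):
    conn.recv(65536)  # read the request; sketch assumes it arrives in one read
    # Non-final informational responses: a status line plus an empty line, no body
    conn.sendall(b"HTTP/1.1 102 Processing\r\n\r\n")  # e.g. "encrypting"
    conn.sendall(b"HTTP/1.1 102 Processing\r\n\r\n")  # e.g. "archiving"
    # Final response
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Length: 4\r\n"
        b"Connection: close\r\n\r\n"
        b"done"
    )
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=lambda: handle(server.accept()[0]), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"POST /upload HTTP/1.1\r\nHost: localhost\r\nContent-Length: 0\r\n\r\n")
raw = b""
while True:
    chunk = client.recv(4096)
    if not chunk:
        break
    raw += chunk
client.close()
print(raw.decode())
```

The client here reads the raw byte stream on purpose: it shows that the interim status lines arrive on the same connection, before the final response.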
I have allowed chunking on the HTTP connection in my Logic App but am still getting the error: BadRequest. Http request failed as there is an error: 'Cannot write more bytes to the buffer than the configured maximum buffer size: 104857600.'.
Please find the screenshots below for details.
HTTP connector of LogicApp
HTTP connector setting
I am assuming the request made by the HTTP connector returns a large amount of data, which requires chunking. According to this documentation, the endpoint you are sending the request to needs to serve partial content, which enables the HTTP connector to use chunking to download the whole payload.
To download chunked messages from an endpoint over HTTP, the endpoint must support partial content requests, or chunked downloads.
Logic Apps can't control whether an endpoint supports partial requests. However, when your logic app gets the first "206" response, your logic app automatically sends multiple requests to download all the content.
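To illustrate the flow the documentation describes, here is a small in-memory simulation (not the Logic Apps implementation itself) of a client assembling a large payload from 206 Partial Content responses driven by Range-style requests:

```python
CONTENT = bytes(range(256)) * 1000  # a 256,000-byte "remote" resource

def serve_range(start, end):
    """Simulate an endpoint that supports partial content requests.

    Returns (status, body) the way a server would answer
    'Range: bytes=start-end' with a '206 Partial Content' response.
    """
    end = min(end, len(CONTENT) - 1)
    return 206, CONTENT[start:end + 1]

def chunked_download(total, chunk_size):
    """Fetch the resource piece by piece, as a chunking connector would."""
    parts = []
    start = 0
    while start < total:
        status, body = serve_range(start, start + chunk_size - 1)
        assert status == 206  # the endpoint must support partial content
        parts.append(body)
        start += len(body)
    return b"".join(parts)

assembled = chunked_download(len(CONTENT), 64 * 1024)
```

If the endpoint cannot answer range requests with 206 responses, the connector has to buffer the entire body at once, which is when the 104857600-byte buffer limit is hit.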
Also, this might be helpful: I came across this thread while facing a similar problem with the SFTP connector.
I have built a simple Python/Flask app for sending automatic messages to Slack and Telegram after receiving a POST request of the form:
response = requests.post(url='https://my-cool-endpoint.a.run.app/my-app/api/v1.0/',
json={'message': msg_body, 'urgency': urgency, 'app_name': app_name},
auth=(username, password))
or even a similar curl request. It works well on localhost, as well as in a containerized application. However, after deploying it to Cloud Run, the requests keep resulting in the following 503 error:
POST 503 663 B 10.1 s python-requests/2.24.0 The request failed because either the HTTP response was malformed or connection to the instance had an error.
Does it have anything to do with a Flask timeout or something like that? I really don't understand what is happening, because the response doesn't (and shouldn't) take more than a few seconds (usually less than 5 s).
Thank you all.
--EDIT
Problem solved after thinking about AhmetB's reply. I found that I was setting the host to the public IP address of the SQL instance, and that is not how it works when you deploy to Cloud Run. For it to work, you must replace the host parameter with unix_socket and set its path.
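As a sketch of that fix (the helper name and parameters are hypothetical; the socket path follows the usual /cloudsql/PROJECT:REGION:INSTANCE convention for Cloud SQL on Cloud Run):

```python
def cloud_sql_conn_kwargs(user, password, db, instance_connection_name):
    # On Cloud Run, Cloud SQL is reached through a unix socket mounted
    # under /cloudsql/, not through the instance's public IP address.
    return {
        "user": user,
        "password": password,
        "db": db,
        "unix_socket": f"/cloudsql/{instance_connection_name}",
    }

# e.g. pymysql.connect(**cloud_sql_conn_kwargs(
#     "app", "secret", "mydb", "my-project:us-central1:my-instance"))
```

Note that there is no "host" key at all: the connection goes over the mounted socket instead of TCP.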
Thank you all! This question is closed.
We have a few APIs that are being long polled through Azure API Management. For some reason, we are receiving a response of 200 [not sent in full (see exception telemetries)] followed by a System.Exception: A task was canceled. exception in App Insights.
Taking a look at the server app service telemetry, the requests were completed without any exception there.
Can anyone help me figure out what this status response means and why are we getting this exception?
These errors mean that APIM started to send the response to the client: it sent the status code and description, and some portion of the headers and body. These traces should be accompanied by exception telemetry, as the response code suggests. Depending on what you see there, it may be:
Client connectivity error - client terminated connection before response was sent in full
Backend connectivity error - backend terminated connection before providing full response
The reasons for both may vary a lot, but given the small duration I'd suspect it's the client closing the connection. For example, if this API is called from a browser, it is normal for the browser to terminate the connection and abort reading the response when the user navigates away from the page that made the call.
Overview
I'm using Angular 6 as the front end for a web application which communicates with a REST API developed in NodeJs. The issue is that the preflight request takes longer than the normal request.
Detail
My frontend Angular 6 application communicates with the REST API to get data from the Database and display it to the user. I'm sending the request via HttpClient to the REST API.
In my REST API developed in NodeJs, all CORS configurations are done correctly; the preflight requests succeed and the actual requests are processed perfectly on my local development machine.
The issue I have is that when I deploy the application to the production machine, the OPTIONS (preflight) request takes more time than the actual GET/POST request (see the attached image). The actual GET request takes only 239 ms while the preflight (OPTIONS) request takes 656 ms, almost 275% of the normal time. This happens for every HTTP request, which in turn affects my website's performance.
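For context, a CORS preflight response carries headers like the following (sketched here in Python as a plain dict; the allowed methods and headers are illustrative placeholders, not this API's actual configuration). In particular, Access-Control-Max-Age lets the browser cache the preflight result so the OPTIONS round trip is not repeated on every call:

```python
def preflight_response_headers(origin):
    # Illustrative headers a server would return for an OPTIONS preflight.
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type, Authorization",
        # Lets the browser cache this preflight result for 10 minutes,
        # skipping repeated OPTIONS requests to the same endpoint
        "Access-Control-Max-Age": "600",
    }
```

A slow OPTIONS response is often a sign that the preflight is passing through middleware or network hops that the cached/actual request avoids, so caching the preflight result reduces how often that cost is paid.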