Azure proxy call and the same back-end function call behaving differently

I am facing an issue with an Azure proxy call.
I created an Azure Function app with a basic GET function that sends a response body back in JSON format, for example: {url: "https://www.google.com"}.
If I configure the direct function endpoint in one of my company's applications, everything works fine: the application is able to launch the URL coming back from the Azure Function call.
I then created a proxy for the same GET function and configured the proxy endpoint in the application. Now the application fails to launch the response URL, even though the same back-end function is hit, logs are created, there are no errors in the logs, and the call ends with status 200.
Unfortunately I don't have control over the application code, so I can't verify the exact cause in the response.
I called both the Azure Function endpoint and the proxy endpoint from Postman and both return the same response body, so I don't understand why it fails in my application.
One more point: I compared the response headers in Postman for both calls.
Response headers for the function app endpoint call:
content-type →application/json; charset=utf-8
date →Wed, 09 Jan 2019 12:39:23 GMT
server →Microsoft-IIS/10.0
transfer-encoding →chunked
x-powered-by →ASP.NET
Response headers for the proxy endpoint call:
content-encoding →gzip
content-length →208
content-type →application/json; charset=utf-8
date →Wed, 09 Jan 2019 12:41:16 GMT
server →Microsoft-IIS/10.0
vary →Accept-Encoding
x-powered-by →ASP.NET, ASP.NET
Is the gzip encoding causing the problem in the proxy call? How can I disable it in the Azure proxy?
My application should be able to launch the URL even when I use the proxy endpoint.
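Whether the gzip body is really the problem can be checked from outside the application. Below is a minimal diagnostic sketch (Node/TypeScript; PROXY_URL is a placeholder for the real proxy endpoint) that calls the proxy twice, once refusing compression and once asking for gzip the way Postman does, so the status, Content-Encoding header, and decoded body can be compared:

// Sketch: call the proxy endpoint twice, once refusing compression and once
// accepting gzip (as Postman does), and compare status, headers, and body.
// PROXY_URL is a placeholder for the real proxy endpoint.
import * as https from "node:https";
import * as zlib from "node:zlib";

const PROXY_URL = "https://<your-function-app>.azurewebsites.net/api/proxy-route";

function probe(acceptEncoding: string): Promise<void> {
  return new Promise((resolve, reject) => {
    https.get(PROXY_URL, { headers: { "Accept-Encoding": acceptEncoding } }, (res) => {
      const chunks: Buffer[] = [];
      res.on("data", (c) => chunks.push(c));
      res.on("end", () => {
        let body = Buffer.concat(chunks);
        // Decompress only if the proxy actually gzipped the body.
        if (res.headers["content-encoding"] === "gzip") {
          body = zlib.gunzipSync(body);
        }
        console.log(acceptEncoding, res.statusCode, res.headers["content-encoding"]);
        console.log(body.toString("utf8"));
        resolve();
      });
    }).on("error", reject);
  });
}

async function main() {
  await probe("identity"); // ask the proxy not to compress
  await probe("gzip");     // ask the proxy to compress, like Postman does
}

main().catch(console.error);

IIS-style dynamic compression normally only compresses when the request advertises Accept-Encoding: gzip. If the decoded bodies match in both calls, compression is driven purely by the caller's header, and the difference your application trips over is more likely elsewhere in the proxy response.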

Related

Not getting response from AWS after uploading image through ESP32

General context:
I am working on an IoT application where I upload images from an ESP32 connected to an SBC.
The uploading is done through an API provided by a third-party backend developer.
The upload API works through other clients (such as Postman, the Python requests library, and the Python http.client library).
The ESP32 is connected to the SBC through UART.
I construct/generate the HTTP request on the SBC and send it as bytes. I have written a function on ESP32 that can send the bytes as a generic HTTP request, to the URL specified.
Then it sends the response string back to the SBC.
All of this works. For small requests, I am facing no issues. I am able to download images, etc.
However, when uploading an image, I don't get a response and I end up timing out after 30 s. I also checked without a timeout, but still no response.
I checked from the server-side. It appears my request has succeeded and the server is sending me 200 with the URL of the image. Using that URL, I was able to verify that the image was uploaded successfully.
However, I do not receive this response on the microcontroller.
Not sure what the issue is. Any suggestions as to what I can do?
I can't give out the code but I'll send a general structure:
ESP32
-> Receives the URL, port, and length of the request
-> Connects to the server, reads the request from UART, and writes it to the server
-> Waits for the response after the request has been sent
Raw HTTP request (generated from Python):
POST (server path) HTTP/1.1
Host: (url)
correlation-id: test5
Content-Type: multipart/form-data; boundary=WebKitFormBoundary7MA4YWxkTrZu0gW
Authorization: Bearer (access token)
Content-Length: 268

--WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="portraits"; filename="name"
Content-Type: image/jpeg

(data)
--WebKitFormBoundary7MA4YWxkTrZu0gW--
Edit 1:
It turns out it is not only the "upload image" request; some other requests behave the same way. Our server has many microservices, and the services written in Node.js that involve more than one redirect seem to be the ones that fail.
I figured out what the issue is, and hopefully it will help anyone else facing the same problem. (Also, some of my requests to the backend server that used a different authentication method did work.)
I had been generating the raw HTTP request with Postman's code generation, but it turns out Postman doesn't add a few headers that are needed when talking to more complex servers. In other words, if I host a local server, the request above works; I had already tested it that way.
What solved my problem was adding these headers:
POST (server path) HTTP/1.1
Host: (server URL)
User-Agent: ESP32
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
correlation-id: test
Authorization: Bearer (access_token)
Content-Length: 146360
Content-Type: multipart/form-data; boundary=af59ef02d60cd0efefb7bc03db1f4ffc

--af59ef02d60cd0efefb7bc03db1f4ffc
Content-Disposition: form-data; name="portraits"; filename="(name)"
Content-Type: image/jpeg

(data)
--af59ef02d60cd0efefb7bc03db1f4ffc--
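For reference, here is a minimal sketch (TypeScript, purely illustrative; the SBC code itself is not shown in the question, and serverHost, serverPath, the token, and the file name are placeholders) of assembling the same kind of request: the multipart body is built first so that Content-Length can be computed from the exact bytes that will be sent, boundary lines and CRLFs included, and the extra headers from the edit are then added in front:

// Sketch: build a raw multipart/form-data upload request.
// serverHost, serverPath, accessToken, and the file name are placeholders.
import { readFileSync } from "node:fs";

const serverHost = "api.example.com";
const serverPath = "/upload";
const accessToken = "<access token>";
const imageBytes = readFileSync("portrait.jpg");

const boundary = "af59ef02d60cd0efefb7bc03db1f4ffc";

// Body: boundary line, part headers, blank line, data, closing boundary.
const body = Buffer.concat([
  Buffer.from(
    `--${boundary}\r\n` +
      `Content-Disposition: form-data; name="portraits"; filename="portrait.jpg"\r\n` +
      `Content-Type: image/jpeg\r\n\r\n`
  ),
  imageBytes,
  Buffer.from(`\r\n--${boundary}--\r\n`),
]);

// Headers: Content-Length must count the body bytes exactly as sent, and the
// extra headers (User-Agent, Accept, Connection, ...) are the ones from the
// edit above that made the difference against the production server.
const headers =
  `POST ${serverPath} HTTP/1.1\r\n` +
  `Host: ${serverHost}\r\n` +
  `User-Agent: ESP32\r\n` +
  `Accept-Encoding: gzip, deflate\r\n` +
  `Accept: */*\r\n` +
  `Connection: keep-alive\r\n` +
  `correlation-id: test\r\n` +
  `Authorization: Bearer ${accessToken}\r\n` +
  `Content-Length: ${body.length}\r\n` +
  `Content-Type: multipart/form-data; boundary=${boundary}\r\n\r\n`;

// The bytes to push over the socket (or over UART to the ESP32):
const rawRequest = Buffer.concat([Buffer.from(headers), body]);
console.log(`request is ${rawRequest.length} bytes, body is ${body.length} bytes`);

Which particular headers a given server insists on depends on the server; the sketch simply mirrors the header set from the edit.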

Can't send cookies with Nest.js backend on Azure Web Service (serverless)

I am having trouble getting cookies to work in my production environment, which uses a Nest.js API backend on Azure Web Services (with Functions). I basically followed this guide: https://dev.to/azure/build-your-first-serverless-app-with-angular-nestjs-and-azure-108h
I also added an Angular frontend app, also hosted on Azure, but as far as my testing is concerned this does not matter.
I want to add authorization via a JWT stored in a cookie. However, the backend does not add the cookie to the headers in production. In development (localhost) everything works like a charm: it works with the local/dev frontend calling the local/dev backend, as well as when I use the VS Code REST Client to call the API directly.
In production, however, I receive neither the cookie nor the other headers I test with. I configured CORS in Azure, and that part looks fine in the sense that I get an HTTP 200 back rather than a CORS error; just the header information is missing.
I have already read a lot of advice, but none of it helped. I do set 'withCredentials' (also in Azure). Do you have any advice on what to try or what the problem might be?
Thanks
Controller for testing purposes:
@Get("server-check")
@HttpCode(HttpStatus.OK)
@Header('Set-Cookie', 'cookieName=12345; Secure; SameSite=None') // Using the @Header decorator
@Header('Access-Control-Expose-Headers', 'set-cookie, authorization')
async serverCheck(@Res() response: Response) {
  response.cookie('rememberme', '1'); // Using the Express res object.
  return response.send('Cookie has been set! :)');
}
Response from the production environment (no cookie in the Application tab; screenshot of the network response in the browser).
Response from REST call in VSC
HTTP/1.1 200 OK
Connection: close
Date: Thu, 05 May 2022 11:51:27 GMT
Transfer-Encoding: chunked
Request-Context: appId=cid-v1:b8d7a5c0-962f-40ec-b128-b47139939cf4

Cookie has been set! :)
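For comparison, this is roughly what a cross-site cookie setup usually needs in production. It is only a sketch: the frontend origin is a placeholder, the bootstrap looks different when the app runs behind Azure Functions, and it uses the standard Express res.cookie options together with Nest's enableCors. The point is that the cookie has to be Secure and SameSite=None, and both sides have to opt into credentials (the Angular HttpClient call would also need withCredentials: true).

// Sketch only: CORS + cookie options that cross-site cookies typically require.
// 'https://my-frontend.azurewebsites.net' is a placeholder for the real Angular origin.
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.enableCors({
    origin: 'https://my-frontend.azurewebsites.net', // a concrete origin, not '*', when credentials are used
    credentials: true,                               // allow cookies on cross-origin requests
  });
  await app.listen(process.env.PORT || 3000);
}
bootstrap();

// ...and in the handler, set the cookie with explicit attributes:
//
//   response.cookie('rememberme', '1', {
//     secure: true,     // required when SameSite is 'none'
//     sameSite: 'none', // allow the browser to attach it on cross-site requests
//     httpOnly: true,
//   });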

What's the purpose of setting http headers and setting appropriate status codes?

I understand headers are used to convey additional information between a client and a server. I recently developed a small web app in Node for learning and have not manually set any response status codes or headers.
For any GET request I simply send back the appropriate response file, which could be an EJS or HTML file, without bothering to update the headers. The app works just fine, but now I see a lot of other code where the headers are updated before sending the response.
For example, res.writeHead(200, {'Content-Type': 'text/html'});
Whom does it help? Is it for debugging? What's the big picture I am missing?
Let's start with an example of an http response borrowed from MDN:
HTTP/1.1 200 OK
Date: Sat, 09 Oct 2010 14:28:02 GMT
Server: Apache
Last-Modified: Tue, 01 Dec 2009 20:18:22 GMT
ETag: "51142bc1-7449-479b075b2891b"
Accept-Ranges: bytes
Content-Length: 29769
Content-Type: text/html
<!DOCTYPE html... (here come the 29769 bytes of the requested web page)
This HTTP response has three parts. The first line gives you the protocol and the status. The next block of lines (up until an empty line) are the headers. Then follows the (optional) content itself.
The status is a required part of an HTTP response: it is sent with every response and tells the recipient whether this is a normal response with content (200), a redirect (3xx), or an error (4xx or 5xx). Within node.js the status defaults to 200, so if you don't set it explicitly the response automatically goes out with a 200.
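A quick illustration of that default, as a sketch using plain node's http module (no framework assumed):

import { createServer } from "node:http";

// One path sets the status explicitly; the other relies on node's default of 200.
createServer((req, res) => {
  if (req.url === "/missing") {
    res.statusCode = 404; // explicit: tell the client "not found"
    res.end("Not here\n");
  } else {
    res.end("Hello\n");   // no statusCode set: the response goes out as 200
  }
}).listen(3000);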
Http headers are used by the recipient for various things. What they are used for depend upon which header you're talking about. Here are a few examples:
Content-Type - Tells the recipient what type of content is in the body of the response. If they don't know the type of the content, then they have to guess and try to figure it out and that's far, far, far less desirable than telling them whether the content is meant to be text/plain or application/json or text/html or image/jpeg, etc...
Content-Length - Tells the recipient how long the content is in bytes. This is required when the content's format does not, by itself, tell the recipient where the end of the response is.
Transfer-Encoding - Tells the recipient how the body of the response is encoded for transfer (for example chunked).
Cache-Control - Gives the recipient information about how long this response can be cached.
Set-Cookie - Sends the recipient a cookie for the client to save and send back with future requests to this origin.
Location - A URI that goes with a 3xx status to indicate where the client should redirect to.
These were just a few examples. There are probably thousands of possible headers, each with their own purpose.
Whom does it help?
It's used by the code that is receiving the http response to know how to interpret it.
Is it for debugging?
That's not its main purpose, but when debugging a problem you may look at the detailed headers to see whether they are what you expect them to be. For example, if req.body is empty in Express when you expect it to be populated, you would look at the Content-Type to see whether it is what you expected and whether it matches middleware you have installed to read it, parse it, and put it into req.body, because if the Content-Type isn't what you expected, your code wouldn't be configured properly to handle it.
What's the big picture I am missing?
Headers are meta data that describes what's in the response and gives the recipient information that is often necessary for knowing how to read the response properly.
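To make that concrete, here is a small sketch (plain node, mirroring the res.writeHead call from the question; the header values are just examples) that sets a few of the headers described above on a response:

import { createServer } from "node:http";

createServer((req, res) => {
  const body = JSON.stringify({ ok: true });
  res.writeHead(200, {
    "Content-Type": "application/json",        // how the recipient should interpret the body
    "Content-Length": Buffer.byteLength(body), // where the body ends
    "Cache-Control": "no-store",               // how long the response may be cached (here: not at all)
    "Set-Cookie": "session=abc123; HttpOnly",  // a cookie for the client to store and send back
  });
  res.end(body);
}).listen(3000);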

Azure Logic App - Receive file from http request

I have an ASP.Net handler that returns a PDF report. I want the Azure Logic App to request the file and then add it to an e-mail as an attachment.
When I try to do this through an HTTP request I get the following error:
BadRequest. Http request failed as there is an error: 'Error while copying content to a stream.'
If I make the request with a browser I get an HTTP 200 response and it works (see the request/response headers captured from Chrome and Fiddler).
I'm sure I could solve this with an Azure Function that gets the file blob and passes it to the e-mail stage, but the documentation suggests that Logic Apps can handle streams and base64 encoding. Am I missing something here?
I tried the following with a static result on an HTTP action to mimic the real HTTP request/stream as closely as possible. I think it comes down to designing the body of the result so that it includes the content and the content-type, as in my mock-up below: the body was plain "pdf content" text, and both application/pdf and application/octet-stream worked as the content-type.
(Screenshots: the 'Send an email' action configuration, the sent email, and the Outlook result.)

Receiving webhook in express from local application

I have an application (a headless CMS) running locally. It has an option to send a webhook to another application. I have been trying to receive this webhook in Express with a POST route, but I have not been able to get it to even register a request coming from the application. I tested the route using Postman and found that it works when I post to it.
const express = require('express');
const router = express.Router(); // mounted at /recall in the main app

router.post('/', (req, res) => {
  console.log("Received");
  console.log(req.body);
  res.status(200).send('ok');
});

module.exports = router;
When I send a POST to http://localhost:3000/recall via Postman, I get the following headers back:
Access-Control-Allow-Credentials →true
Access-Control-Allow-Headers →X-Requested-With,content-type
Access-Control-Allow-Methods →GET, POST, OPTIONS, PUT, PATCH, DELETE
Access-Control-Allow-Origin →http://localhost:3000
Connection →keep-alive
Content-Length →2
Content-Type →text/html; charset=utf-8
Date →Mon, 03 Sep 2018 21:23:22 GMT
ETag →W/"2-eoX0dku9ba8cNUXvu/DyeabcC+s"
X-Powered-By →Express
With the body:
ok
My script also prints the body of the post.
I can verify the webhook is working by testing it with request bin. I get back the following:
FORM/POST PARAMETERS
None
HEADERS
Cloudfront-Forwarded-Proto: http
Cloudfront-Is-Mobile-Viewer: false
Cloudfront-Is-Desktop-Viewer: true
Connect-Time: 1
Via: 1.1 3566cbcd49f71967b52a565888e4d272.cloudfront.net (CloudFront), 1.1 vegur
Content-Length: 387
Connection: close
Accept: */*
Content-Type: application/json
Cloudfront-Viewer-Country: US
X-Amz-Cf-Id: dRe5CvkLFJZJNcpZbhmeEHo0ar_taj6guvN8utwkyVXM7ZMJc5BZTw==
Cloudfront-Is-Smarttv-Viewer: false
X-Request-Id: 4b6d2cdc-5c45-495b-b358-2e808e1bfeb4
Cloudfront-Is-Tablet-Viewer: false
Total-Route-Time: 0
Host: requestbin.fullcontact.com
BODY
{"event":"singleton.remove","hook":"Save After Sington","backend":1,"args":[{"name":"Wonder","label":"Wonder","_id":"Wonder5b8cef36a0097","fields":[{"name":"Best","label":"","type":"text","default":"","info":"","group":"","localize":false,"options":[],"width":"1-1","lst":true,"acl":[]}],"template":"","data":null,"_created":1535962934,"_modified":1535962934,"description":"","acl":[]}]}
I tried enabling cross-origin requests. How can I fix this problem? My thought is that it has something to do with the fact that this request originates and ends locally.
For an application to consume webhooks it needs to have a publicly accessible URL. Basically, the rest of the world (internet) doesn't know your localhost:3000 endpoints exist.
An easy way to fix this is to use a lightweight tool like ngrok to expose your local server; in turn allowing other applications to communicate with yours.
You will need to define the specific callback route on which you want to consume the webhook POST request. Example below.
Run your Node script.
Turn on ngrok.
Send webhook POST requests to your endpoint using the ngrok HTTPS address.
Now, instead of sending your webhook to localhost:8000/MyWebhookConsumingEndpoint
you send it to
https://95e26af4.ngrok.io/MyWebhookConsumingEndpoint
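For completeness, a sketch of what the webhook-consuming endpoint itself might look like (the route name is taken from the example above; express.json() is used because the CMS posts a JSON body, as the RequestBin capture shows):

import express from "express";

const app = express();
app.use(express.json()); // the CMS sends Content-Type: application/json

// Route name matches the example above; expose it publicly via the ngrok address.
app.post("/MyWebhookConsumingEndpoint", (req, res) => {
  console.log("webhook event:", req.body.event); // e.g. "singleton.remove"
  res.status(200).send("ok");
});

app.listen(3000, () => console.log("listening on http://localhost:3000"));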
