Azure Staging Environment "No 'Access-Control-Allow-Origin' header is present on the requested resource"

I've inherited a web app hosted in Azure, and it appears that CORS in the staging environment is not functioning the way it does in all the other environments.
I'm fairly new to Azure administration. I've applied the CORS configuration on the API App Service (it previously had no config set), which did not resolve the issue.
You can compare the general/response headers from the different environments below.
Local Dev...
Request URL: http://localhost:3001/api/getstarted/plans?fetchExcluded=true
Request Method: GET
Status Code: 200 OK
Remote Address: [::1]:3001
Referrer Policy: strict-origin-when-cross-origin
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Encoding: gzip
Content-Type: application/json; charset=utf-8
Date: Thu, 04 Nov 2021 21:55:22 GMT
Keep-Alive: timeout=5
Strict-Transport-Security: max-age=15552000; includeSubDomains
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-DNS-Prefetch-Control: off
X-Download-Options: noopen
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Current Production...
Request URL: https://api.<webapp>.app/api/getstarted/plans?fetchExcluded=true
Request Method: OPTIONS
Status Code: 204 No Content
Remote Address: ##.###.###.###:443
Referrer Policy: strict-origin-when-cross-origin
Access-Control-Allow-Headers: authorization
Access-Control-Allow-Methods: GET,HEAD,PUT,PATCH,POST,DELETE
Access-Control-Allow-Origin: *
Date: Thu, 04 Nov 2021 22:04:19 GMT
Strict-Transport-Security: max-age=15552000; includeSubDomains
Vary: Access-Control-Request-Headers
X-Content-Type-Options: nosniff
X-DNS-Prefetch-Control: off
X-Download-Options: noopen
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
However in staging...
Request URL: https://<webapp>.azurewebsites.net/api/getstarted/plans?fetchExcluded=true
Request Method: OPTIONS
Status Code: 503 Service Temporarily Unavailable
Remote Address: ##.###.###.###:443
Referrer Policy: strict-origin-when-cross-origin
Content-Length: 398
Date: Thu, 04 Nov 2021 21:56:02 GMT
Server: nginx
and from debug console in staging...
- Access to XMLHttpRequest at
'https://<webapp>.azurewebsites.net/api/getstarted/plans?fetchExcluded=true'
from origin 'https://<webapp>.z8.web.core.windows.net' has been
blocked by CORS policy: Response to preflight request doesn't pass
access control check: No 'Access-Control-Allow-Origin' header is
present on the requested resource. xhr.js:178
- GET
https://<webapp>-api-staging.azurewebsites.net/api/getstarted/plans?fetchExcluded=true
net::ERR_FAILED
I've checked the Azure subscription usage + quotas, and the only thing nearing its quota is "Network Watchers".
The API framework is Express.js with the cors middleware...
app.use(cors())
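For reference, here is a minimal sketch of the setup (the explicit origin shown is only a placeholder for illustration; the actual code just calls cors() with no options):
const express = require('express');
const cors = require('cors');

const app = express();
// cors() with no options sends Access-Control-Allow-Origin: * and answers OPTIONS
// preflights with 204; passing an origin option restricts the allowed origin instead.
app.use(cors({ origin: 'https://<webapp>.z8.web.core.windows.net' }));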

In my opinion the issue has nothing to do with CORS itself, or, better said, CORS is only the symptom here:
As you can see, in the staging environment you are receiving a 503 Service Unavailable HTTP status code:
Status Code: 503 Service Temporarily Unavailable
So my guess is that the staging environment is not working properly for some reason, and the associated load balancer is returning a 503 error page, which of course carries no CORS headers, so your browser complains because your application never receives a valid response from the API.
Please review the status of the staging environment: once it is healthy, it will probably return a proper response to your application.

Unfortunately the issue was not CORS at all; rather, the server image was failing to build and compile because the code uses ESNext JS syntax that the server's runtime does not support (server < ESNext).
I found the log-stream API, which let me view all of the errors; for posterity, you may be able to do the same at https://<webapp>.scm.azurewebsites.net/api/logstream
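If it helps, the same stream can also be tailed from the Azure CLI (the app name and resource group below are placeholders):
az webapp log tail --name <webapp>-api-staging --resource-group <resource-group>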

Related

Varnish not making origin call for infrequently requested cache

I'm noticing this behavior on Varnish 6.5 where it's not making backend calls per the max-age Cache-Control directive from the origin response if the resource is not frequently requested by clients.
Below is the expected behavior I see for a resource requested every second. It has a 20-second max-age Cache-Control header from the origin:
Request 1:
HTTP/2 200
date: Tue, 20 Jul 2021 02:02:02 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 1183681 1512819
age: 17
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
Request 2:
HTTP/2 200
date: Tue, 20 Jul 2021 02:02:04 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 891620 1512819
age: 19
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
Request 3:
HTTP/2 200
date: Tue, 20 Jul 2021 02:02:05 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 1183687 1512819
age: 20
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
Request 4:
HTTP/2 200
date: Tue, 20 Jul 2021 02:02:06 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 854039 1183688
age: 1
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
You can see that Request #4 above makes a new origin request, with the cache request ID now being 1183688.
Now if I wait a long while and make that same request, the cache age is pretty old and Varnish does not make an origin request to cache a fresh object:
Request 5 after a while:
HTTP/2 200
date: Tue, 20 Jul 2021 02:10:08 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 1512998 1183688
age: 482
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
I suppose I could start adding an Expires header from the origin, but I'm looking for an explanation of why Varnish behaves this way when the request is infrequent. Thanks.
TTL header precedence in Varnish
Varnish does check the max-age directive, but there might be other factors that can cause the TTL to be an unexpected value.
Here's the TTL precedence (a small illustrative sketch follows the list):
The Cache-Control header's s-maxage directive is checked.
When there's no s-maxage, Varnish will look for max-age to set its TTL.
When there's no Cache-Control header being returned, Varnish will use the Expires header to set its TTL.
When none of the above apply, Varnish will use the default_ttl runtime parameter as the TTL value. Its default value is 120 seconds.
Only then will Varnish enter vcl_backend_response, letting you change the TTL.
Any TTL being set in VCL using set beresp.ttl will get the upper hand, regardless of any other value being set via response headers.
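As an illustration only (this is plain JavaScript, not Varnish's internals), the precedence boils down to something like:
// Sketch of the TTL precedence described above; vcl_backend_response can still override it.
function resolveTtl(cacheControl, expires, defaultTtl = 120) {
  const sMaxAge = /s-maxage=(\d+)/.exec(cacheControl || '');
  if (sMaxAge) return Number(sMaxAge[1]);                                   // 1. s-maxage
  const maxAge = /max-age=(\d+)/.exec(cacheControl || '');
  if (maxAge) return Number(maxAge[1]);                                     // 2. max-age
  if (expires) return Math.max(0, (new Date(expires) - Date.now()) / 1000); // 3. Expires
  return defaultTtl;                                                        // 4. default_ttl (120s)
}
resolveTtl('public, max-age=20', null); // 20, matching the responses above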
Your specific situation
The best way to figure out what's going on is by running varnishlog and adding a filter for the URL you want to track.
Here's an example for the homepage:
varnishlog -g request -q "ReqUrl eq '/'"
The output will be extremely verbose, but will contain all the info you need.
Tags that are of particular interest are:
TTL (see https://varnish-cache.org/docs/6.5/reference/vsl.html#varnish-shared-memory-logging)
BerespHeader (specifically the Cache-Control backend response header)
RespHeader (specifically the Cache-Control response header)
Please also have a look at your VCL and check whether or not the TTL is changed by set beresp.ttl =.
What I need to help you
In summary, if you want further assistance, please provide your full VCL, as well as a varnishlog extract for the transaction that is giving you the unexpected behavior.
Based on that information, we'll have a pretty good idea what's going on.

Gmail API throwing 401 Unauthorized

I'm trying to integrate the Gmail API in my app in order to send emails to users.
I've created a project on the Google Developers Console, enabled the Gmail API in it, downloaded the credentials as JSON, and followed the instructions provided at https://developers.google.com/gmail/api/quickstart/nodejs
I've also added https://developers.google.com/oauthplayground as a redirect URI for the project.
When I run the code, I get redirected to a consent screen. I choose the account that owns the project, then get redirected to "oauthplayground".
However, when I try to exchange the authorization code for tokens, I receive a 401 Unauthorized.
The full response is:
HTTP/1.1 401 Unauthorized
Content-length: 75
X-xss-protection: 0
X-content-type-options: nosniff
Transfer-encoding: chunked
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Vary: Origin, X-Origin, Referer
Server: scaffolding on HTTPServer2
-content-encoding: gzip
Pragma: no-cache
Cache-control: no-cache, no-store, max-age=0, must-revalidate
Date: Wed, 21 Oct 2020 07:53:18 GMT
X-frame-options: SAMEORIGIN
Alt-svc: h3-Q050=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-T050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Content-type: application/json; charset=utf-8
{
"error_description": "Unauthorized",
"error": "unauthorized_client"
}
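For context, the exchange step I'm running is essentially the quickstart code (a sketch; the client ID, secret, scope, and authorization code are placeholders, not my exact values):
const { google } = require('googleapis');

// Placeholder values taken from the downloaded credentials JSON.
const oAuth2Client = new google.auth.OAuth2(
  CLIENT_ID,
  CLIENT_SECRET,
  'https://developers.google.com/oauthplayground'
);
const authUrl = oAuth2Client.generateAuthUrl({
  access_type: 'offline',
  scope: ['https://www.googleapis.com/auth/gmail.send'],
});
// ...after consenting I copy the authorization code from the playground...
const { tokens } = await oAuth2Client.getToken(code); // this call returns the 401
oAuth2Client.setCredentials(tokens);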
Any help is appreciated
Thanks

Setting cookies cross-domain? node and react

I am building a Node/Express REST app with a React frontend. In development the backend runs at localhost:5000 and the frontend at localhost:3000. I am using a session-based authentication system, so the backend sends a Set-Cookie header when authentication is successful. The problem is that, since the frontend and the backend are on different origins, the cookie cannot be set.
What can be done?
Plus, I do not want to use the JWT for authentication for the reasons laid out at http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-for-sessions/
Middleware
const cors = require('cors');
app.use(cors({ credentials: true, origin: 'http://localhost:3000' }));
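For illustration, this is roughly how that middleware pairs with the session cookie and a credentialed request from the React app (a sketch; express-session, axios, and the /login route are assumptions, not my exact code):
const session = require('express-session');

app.use(session({
  secret: 'placeholder-secret', // placeholder value
  resave: false,
  saveUninitialized: false,
  cookie: { httpOnly: true, sameSite: 'lax', secure: false }, // secure: true once served over HTTPS
}));

// On the React side the request has to opt in to sending/receiving cookies:
axios.post('http://localhost:5000/login', credentials, { withCredentials: true });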
The following headers are sent back from the backend:
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 129
Content-Type: application/json; charset=utf-8
Date: Sun, 22 Dec 2019 14:24:03 GMT
ETag: W/"81-RZ35EekMxvCHWWNZ8hxPVFlS+R8"
Set-Cookie: connect.sid=s%3Afa5CxZSLQznDHuO7I6y9qAfy5-VuezUj.I%2F3BP6vfXybkyUXej6%2Fjt5ribqmmfoy1NQfSImuNYaU; Path=/; Expires=Sun, 29 Dec 2019 14:24:03 GMT; HttpOnly
Strict-Transport-Security: max-age=15552000; includeSubDomains
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-DNS-Prefetch-Control: off
X-Download-Options: noopen
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block

IBM Cloud GitLab + Slack Integration: HTTP Status 400 missing_text_or_fallback_or_attachments

I am trying to connect my GitLab repository from IBM Cloud to our Slack channel. I get an HTTP 400 status error: missing_text_or_fallback_or_attachments
My response headers look like this:
Content-Type: text/html
Transfer-Encoding: chunked
Connection: close
Date: Wed, 20 Feb 2019 09:14:21 GMT
Server: Apache
Vary: Accept-Encoding
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Referrer-Policy: no-referrer
X-Frame-Options: SAMEORIGIN
Access-Control-Allow-Origin: *
X-Via: haproxy-www-v06s
X-Cache: Error from cloudfront
Via: 1.1 fb8e6daa39bc4124e46750734008822c.cloudfront.net (CloudFront)
X-Amz-Cf-Id: Mv3PJD_D63jNuvA4YldBtHcMNGP-1fofXQ-BxgOmBy7eqPgkjpfOKg==
The integration settings look like this:
Are you creating a new webhook in GitLab, or are you using the Slack Notifications integration? Only Slack Notifications are supported; some people had similar issues here: https://gitlab.com/gitlab-org/gitlab-ce/issues/41853#note_66355191

CDN - Serve different content-type based on Accept header (Verizon/EdgeCast Premium)?

I have a server which returns a different response based on the Accept header, e.g. if the Accept header includes "image/webp", a WebP image is served; otherwise a JPEG is served.
We run Varnish at the server level and it does this correctly, as per the example below:
Request (with image/webp in Accept header):
curl -s -D - -o /dev/null "https://REDACTED/media/tokinoha_bowl-4.jpg?sh=2&fmt=webp,jpg" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8"
Response (webp image served):
HTTP/2 200
date: Wed, 06 Feb 2019 08:25:05 GMT
content-type: image/webp
access-control-allow-origin: *
cache-control: public, s-maxage=31536000, max-age=31536000
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
strict-transport-security: max-age=31536000; includeSubDomains
vary: Accept-Encoding, Accept-Encoding,Origin
referrer-policy: strict-origin-when-cross-origin
accept-ranges: bytes
content-length: 60028
Request (no webp in Accept header, jpg served):
curl -s -D - -o /dev/null "https://REDACTED/media/tokinoha_bowl-4.jpg?sh=2&fmt=webp,jpg" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/apng,*/*;q=0.8"
Response:
HTTP/2 200
date: Wed, 06 Feb 2019 08:25:18 GMT
content-type: image/jpeg
access-control-allow-origin: *
cache-control: public, s-maxage=31536000, max-age=31536000
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
strict-transport-security: max-age=31536000; includeSubDomains
vary: Accept-Encoding, Accept-Encoding,Origin
referrer-policy: strict-origin-when-cross-origin
accept-ranges: bytes
content-length: 166991
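For reference, the negotiation the origin performs amounts to something like the sketch below (an Express-style illustration, not our actual Varnish/origin setup; webpPathFor and jpegPathFor are hypothetical helpers):
app.get('/media/:image', (req, res) => {
  // Serve WebP when the client advertises support for it, otherwise fall back to JPEG.
  const wantsWebp = (req.headers.accept || '').includes('image/webp');
  res.type(wantsWebp ? 'image/webp' : 'image/jpeg');
  res.sendFile(wantsWebp ? webpPathFor(req.params.image) : jpegPathFor(req.params.image));
});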
We have the options below set up in the Rules Engine; however, whichever content type is cached first is served on all subsequent requests, irrespective of the request's Accept header.
Rules Engine settings
Does anyone know of a way to achieve this?
Thanks in advance!
We had the same problem with Verizon/Edgecast: one URL delivered two different image types (JPEG and WebP) depending on the Accept header. The origin (imgix) correctly sent Vary: Accept, but Edgecast ignored that, cached whatever it got first, and so browsers without WebP support sometimes received the wrong format.
We solved it with a rule in Edgecast:
WebP rule
The query parameter auto is always part of the URL and can therefore always be removed from the cache key. The second query parameter, varyWebP, lets us identify these URLs unambiguously and prevents a collision with URLs that have no auto query parameter.
In this case the URL
https://[HOST]/[PATH]?a=1&b=2&c=3&auto=compress,format
creates the same cache key as:
https://[HOST]/[PATH]?a=1&b=2&c=3
That's why the query parameter varyWebP protects us.
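As an illustration only (not Edgecast's actual implementation), removing auto from the cache key amounts to:
// Dropping the 'auto' parameter makes both URL variants map to the same cache key.
function cacheKey(rawUrl) {
  const url = new URL(rawUrl);
  url.searchParams.delete('auto');
  return url.origin + url.pathname + url.search;
}
cacheKey('https://host/path?a=1&b=2&c=3&auto=compress,format') === cacheKey('https://host/path?a=1&b=2&c=3'); // true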
