The following request succeeds:
HEAD https://ascendxyzweutest.blob.core.windows.net/b89e6c6cdde0421996a7ba47fcb57184-workset?restype=container HTTP/1.1
User-Agent: WA-Storage/4.3.0 (.NET CLR 4.0.30319.0; Win32NT 6.2.9200.0)
x-ms-version: 2014-02-14
x-ms-client-request-id: b566c59d-b8ac-4b7e-9cfc-820337971cc9
x-ms-date: Thu, 05 Feb 2015 00:59:17 GMT
Authorization: SharedKey ascendxyzweutest:+KdHX5Bewm5uP4lPHUtEcCv79tC3dQK28evyg1trOlw=
Host: ascendxyzweutest.blob.core.windows.net
Connection: Keep-Alive
reply:
HTTP/1.1 404 The specified container does not exist.
Transfer-Encoding: chunked
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 2fa4a112-0001-0010-7b56-b4eb01000000
x-ms-version: 2014-02-14
Date: Thu, 05 Feb 2015 00:59:23 GMT
The above example is from the .NET Storage Library.
Then I try to do the same with a WebRequest.
HEAD https://ascendxyzweutest.blob.core.windows.net/ccf2a083affa4e6c8d489fe1b2f0d32a-workset?restype=container HTTP/1.1
x-ms-version: 2014-02-14
x-ms-client-request-id: 92afdcaf-5afe-4f6a-914e-4850a4f0bd1d
x-ms-date: Thu, 05 Feb 2015 01:01:56 GMT
Authorization: SharedKey ascendxyzweutest:LRoIdLp0m4nR0XhRlcTT7gyyi6zYJhGg3fHmXKemPVc=
Host: ascendxyzweutest.blob.core.windows.net
Connection: Keep-Alive
reply:
HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Transfer-Encoding: chunked
Server: Microsoft-HTTPAPI/2.0
x-ms-request-id: 61b7318c-0001-001c-6cc4-b6edcd000000
Date: Thu, 05 Feb 2015 01:02:22 GMT
Every other post on this suggests that it's the x-ms-date field that is off by more than 15 minutes. I executed these two from the same machine with 5 minutes' separation, and this is also what is seen in the request headers, so I don't believe that the time is off.
I am signing the request with the helpers from the Azure Storage SDK.
public Task SignRequestAsync(HttpWebRequest request, string tenantid, string container)
{
    // Reuse the SDK's canonicalizer and shared-key handler so the signature
    // matches what the Storage Library itself would compute.
    var handler = new SharedKeyAuthenticationHandler(
        SharedKeyCanonicalizer.Instance,
        account.Credentials,
        account.Credentials.AccountName);
    handler.SignRequest(request, null); // second argument is the optional OperationContext
    return Task.FromResult(0);
}
Any pointers on what could have gone wrong?
Gaurav Mantri assisted me on this and found the issue. The issue was due to some headers being set that were used when calculating the signature, but were then omitted when making the actual request (Content-Length being set to 0, etc.).
So the problem is with your “HEAD” request when you check if the container exists. Basically, when calculating the signature you’re passing “Content-Length” (even though its value is 0), however it is not passed in the request headers. Thus your signature mismatches and you get the 403 error. If I comment out the code where you set the “ContentLength” property of the request, your “HEAD” request succeeds (but then your PUT request would fail).
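A minimal sketch of the corrected call site, assuming the SignRequestAsync helper from the question (accountName, tenantid, and container are placeholder variables). The key point is simply to leave ContentLength untouched on a HEAD request:

var request = (HttpWebRequest)WebRequest.Create(
    $"https://{accountName}.blob.core.windows.net/{container}?restype=container");
request.Method = "HEAD";
// Deliberately do NOT set request.ContentLength: .NET omits Content-Length
// on a body-less HEAD, so any value baked into the canonicalized string
// (even 0) never reaches the wire, and the service then computes a
// different signature, producing the 403 above.
await SignRequestAsync(request, tenantid, container);
try
{
    using (var response = (HttpWebResponse)await request.GetResponseAsync())
    {
        Console.WriteLine((int)response.StatusCode); // 200: container exists
    }
}
catch (WebException ex) when ((ex.Response as HttpWebResponse)?.StatusCode == HttpStatusCode.NotFound)
{
    Console.WriteLine(404); // container missing: the "expected" 404 from the first trace
}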
Related
I have this nagging error message when accessing an endpoint on the Azure portal.
Can anyone help?
HTTP/1.1 401 Unauthorized
cache-control: none
content-length: 0
content-security-policy: script-src 'self'
date: Tue, 23 Nov 2021 16:47:15 GMT
expect-ct: max-age=604800,enforce
ocp-apim-apiid: cash-code
ocp-apim-operationid: generate-cashcode
ocp-apim-subscriptionid: master
For me this was simply a case of using the wrong "secret", i.e. I accidentally used the SecretID instead of the value of the secret.
That was allowing me to get a code without an error message, but the code was not actually valid even though it looked like a proper one, and all I got back was the infamous 401 without a clue as to why it was happening.
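In case it helps anyone else, here is a minimal sketch of where the secret's Value (not its SecretID) belongs, assuming a client-credentials flow with MSAL.NET; the IDs and scope are placeholders:

using Microsoft.Identity.Client;

var app = ConfidentialClientApplicationBuilder
    .Create("<application-client-id>")
    // Must be the secret's *Value* (shown only once at creation time),
    // NOT the SecretID GUID displayed next to it in the portal.
    .WithClientSecret("<client-secret-value>")
    .WithAuthority("https://login.microsoftonline.com/<tenant-id>")
    .Build();

var result = await app.AcquireTokenForClient(
    new[] { "https://management.azure.com/.default" }).ExecuteAsync();
Console.WriteLine(result.AccessToken);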
My app is being throttled by the Azure Management API, and it would be much easier to investigate which requests are causing trouble if I had access to the rate-limiting headers. I thought they were returned by default, but they're not, at least not for me. I've followed the steps listed in this article, but none of the rate-limiting headers are returned. I even upgraded the Azure CLI to the latest version, but it didn't help either.
So the question is whether I need to enable something in the Azure portal to retrieve this kind of information, or whether it's something else. Anyway, I'd be very thankful for any help as I feel a bit confused.
Here's an example response:
GET https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourcegroups?api-version=2021-04-01
HTTP/2 200
cache-control: no-cache
pragma: no-cache
content-type: application/json; charset=utf-8
expires: -1
x-ms-request-id: d093ff31-9b06-41dd-9c3a-6e7b7275e023
x-ms-correlation-request-id: d093ff31-9b06-41dd-9c3a-6e7b7275e023
x-ms-routing-request-id: EASTUS2:20211024T140342Z:d093ff31-9b06-41dd-9c3a-6e7b7275e023
strict-transport-security: max-age=31536000; includeSubDomains
x-content-type-options: nosniff
date: Sun, 24 Oct 2021 14:03:42 GMT
content-length: 289
<JSON_DATA>
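For what it's worth, here is a minimal sketch (HttpClient; token acquisition elided, subscriptionId and bearerToken are placeholders) of dumping whatever x-ms-ratelimit-* headers ARM does return, such as the documented x-ms-ratelimit-remaining-subscription-reads:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Hypothetical helper: issue one ARM call and print any rate-limit headers.
static async Task DumpRateLimitHeadersAsync(string subscriptionId, string bearerToken)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", bearerToken);
        var url = "https://management.azure.com/subscriptions/" + subscriptionId +
                  "/resourcegroups?api-version=2021-04-01";
        using (var response = await client.GetAsync(url))
        {
            foreach (var header in response.Headers)
            {
                // e.g. x-ms-ratelimit-remaining-subscription-reads: 11999
                if (header.Key.StartsWith("x-ms-ratelimit", StringComparison.OrdinalIgnoreCase))
                    Console.WriteLine(header.Key + ": " + string.Join(",", header.Value));
            }
        }
    }
}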
I have defined a rewrite rule on my Azure Application Gateway that rewrites a response header (Server=Unknown). I see that the rule is correctly executed on the GET, OPTIONS, and DELETE methods (returning either HTTP 200 or 405); however, the rule does not seem to be fired on a TRACE method.
I wanted to address a finding from penetration tests stating that the server discloses technical information, allowing an attacker to identify the reverse proxy installed.
Below is the response to an HTTP DELETE:
HTTP/1.1 405
Date: Mon, 02 Nov 2020 14:47:18 GMT
Content-Type: text/plain
Content-Length: 0
Connection: keep-alive
X-FRAME-OPTIONS: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-cache,no-store,must-revalidate
Pragma: no-cache
Allow: GET
Server: Unknown
And below the same call using TRACE:
HTTP/1.1 405 Not Allowed
Server: Microsoft-Azure-Application-Gateway/v2
Date: Mon, 02 Nov 2020 14:47:50 GMT
Content-Type: text/html
Content-Length: 183
Connection: close
Also, to me the fact that the TRACE response does not contain as many headers as the DELETE one is proof that the call does not reach the web server (which is fine with me), but then I would expect the application gateway to fire the same rewrite rule as for any other method.
I also tried removing the header instead of setting it to "Unknown", but this has the same effect (the header is removed on all methods except TRACE).
The TRACE method is not yet added to the list of methods that rewrite rules apply to. We have this on our roadmap, but with no ETA. Please follow the Azure Updates page for further updates.
I have JSON compression configured for my Web API in Azure, following the MSDN article Use AppCmd.exe to Configure IIS at Startup.
I publish my roles and start testing, and all is well according to Fiddler.
Here is an example request header:
GET http://x.cloudapp.net:8080/api/xyz HTTP/1.1
Accept: application/json
Host: x.cloudapp.net:8080
Accept-Encoding: gzip
Here is an example response header:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Encoding: gzip
Expires: -1
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Thu, 18 Jul 2013 22:27:38 GMT
Content-Length: 2472
Just a few Web API calls later (like 6 seconds later) all responses are no longer compressed.
Request header:
GET http://xyz HTTP/1.1
Accept: application/json
Host: sp-test-server2012.cloudapp.net:8080
Accept-Encoding: gzip
Response header:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Thu, 18 Jul 2013 22:27:44 GMT
Content-Length: 16255
Note the missing Content-Encoding in the second response.
So I get a few hundred calls that are compressed, and then most of the rest are uncompressed. Every now and again I can see that another response is compressed. Or, if I stop testing for a while and then resume, it seems that compression starts again.
Is compression in IIS 8 'throttled' or something? Say, if the CPU is nearly maxed out, does IIS stop compressing?
In monitoring my WebRole in Azure, my CPU usage can go above 90% during my heavy load testing. It is hard to tell if this is correlated with the lack of compression on the results. Memory usage does not appear to be an issue at all.
I would like this to be more reliable and predictable!
Well, apparently yesterday my Google-fu failed me. I found the answer today, and it is true that IIS will or will not dynamically compress content based on CPU usage: HTTP Compression.
There are two settings that control dynamic compression. One specifies the CPU usage above which it is disabled: dynamicCompressionDisableCpuUsage, default 90%. The other specifies the usage below which it is re-enabled: dynamicCompressionEnableCpuUsage, default 50%.
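If the defaults don't fit a CPU-heavy workload, both thresholds can be adjusted the same way the compression itself was configured, e.g. via AppCmd.exe in a startup task (the 100/50 values here are only an example):

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /dynamicCompressionDisableCpuUsage:100 /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /dynamicCompressionEnableCpuUsage:50 /commit:apphost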
The things you learn.
This article might be helpful to force compression:
ASP.NET Web API GZip compression ActionFilter with 8 lines of code
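Roughly, the approach from that article looks like the following sketch (classic ASP.NET Web API 2 assumed; this is not the article's exact code). Because the filter does the compression itself, IIS's CPU-based throttling above no longer applies:

using System.IO;
using System.IO.Compression;
using System.Net.Http;
using System.Web.Http.Filters;

public class GZipCompressionAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(HttpActionExecutedContext context)
    {
        var content = context.Response?.Content;
        if (content == null) return;

        // Buffer the response body, gzip it, and swap it back in.
        var bytes = content.ReadAsByteArrayAsync().Result;
        byte[] zipped;
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(bytes, 0, bytes.Length);
            zipped = output.ToArray();
        }

        context.Response.Content = new ByteArrayContent(zipped);
        context.Response.Content.Headers.ContentType = content.Headers.ContentType;
        context.Response.Content.Headers.ContentEncoding.Add("gzip");
    }
}

Decorating an action or controller with [GZipCompression] then forces compression regardless of server load.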
Of course they'll charge for CPU time in heavy load situations.
We have a Java process that fetches resources over HTTP. I discovered that one resource was not being pulled correctly even after a client had modified it. Digging into it, I found that on the server where the process runs, the Last-Modified date of the resource does not match what I see when I view the info from a browser. I then tried fetching it from a different server and from my laptop, and both of those showed the correct date.
I've since patched the process to allow the option to ignore the header date for cases where it exists (but is incorrect), but I would really love to know why this happens.
For reference, here is a curl response from the server that returns the incorrect info.
HTTP/1.1 200 OK
Server: Sun-ONE-Web-Server/6.1
Date: Fri, 23 Sep 2011 14:16:57 GMT
Content-length: 132
Content-type: text/plain
Last-modified: Wed, 15 Sep 2010 21:58:20 GMT
Etag: "84-4c91417c"
Accept-ranges: bytes
And here is the same request on a different server (I also get the same results on my machine):
HTTP/1.1 200 OK
Server: Sun-ONE-Web-Server/6.1
Date: Fri, 23 Sep 2011 14:18:47 GMT
Content-length: 132
Content-type: text/plain
Last-modified: Fri, 23 Sep 2011 01:20:43 GMT
Etag: "84-4e7bdeeb"
Accept-ranges: bytes
Both servers are running on Fedora 10.
Can anyone shed some light on this for me and how I might be able to fix this long term?
So you have two servers and both return different results, i.e. an inconsistency problem (I can basically see this from the Etag header)?
My first guess would be caching. Are there any caches active? Maybe the cache invalidation doesn't work correctly, or the TTL (time-to-live) settings are too long.
As a test, try a fresh restart of the machine with the stale data and see whether the results change (a restart usually flushes most simple cache setups).
Which kind of backend does the resource come from initially (database, file system, 3rd-party service)?
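If a misbehaving intermediary cache is suspected, a quick check from the affected server is to ask for explicit revalidation and compare the headers (hypothetical URL; -I requests headers only):

curl -I -H "Cache-Control: no-cache" -H "Pragma: no-cache" http://example.com/path/to/resource

If Last-modified and Etag then jump to the fresh values, something between that machine and the origin was serving a stale copy.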