Azure Web App returns binary file 'modified' with incorrect size

I have an Azure web app that returns a binary file using FileStreamResult. Works fine...
FileStreamResult fsr = File(blobStream, System.Net.Mime.MediaTypeNames.Application.Octet, "testfile.bin");
return fsr;
I copied this code to another web app, returning the exact same blob item, and it returns a 'corrupted' file that is almost 2x larger (117K vs the original 65K). Both apps use the same version of .NET, the same Azure account, the same Azure storage account, and the response headers are the same...but something must be different!
Update: The FileStreamResult has the correct data/size in its buffer. If I copy the data out of the FileStream before returning, it's fine. So the FileStreamResult itself is correct; the problem happens when the response is generated, somewhere in how the FileStreamResult is written to the response. I've also tried ActionResult, with the same problem.
Update 2: Still can't figure it out, but...it's replacing the non-UTF-8 bytes with EF BF BD, the UTF-8 encoding of the Unicode replacement character (U+FFFD). That also explains the size growth: each replaced byte becomes three bytes. I'm not sure why; I thought the content type of application/octet-stream would indicate the data is not to be interpreted. I also can't find why one web app does this but the other does not.

You could update the package Swashbuckle.AspNetCore to the latest version 4.0.1.
Here is a similar issue you could refer to.

Finally, success...it turns out there was an ActionFilterAttribute on the base controller that was forcing the encoding.
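For anyone hitting the same symptom, here is a hypothetical sketch (assuming classic ASP.NET MVC; the attribute name and exact mechanism are made up for illustration) of the kind of filter that can cause it:
using System.Text;
using System.Web.Mvc;

// A filter like this on a base controller forces a text encoding on every
// response. A binary body can then be treated as text, and any byte
// sequence that isn't valid UTF-8 gets replaced with EF BF BD (U+FFFD).
public class ForceUtf8ResponseAttribute : ActionFilterAttribute
{
    public override void OnResultExecuting(ResultExecutingContext filterContext)
    {
        filterContext.HttpContext.Response.ContentEncoding = Encoding.UTF8;
        filterContext.HttpContext.Response.Charset = "utf-8";
    }
}
Removing the filter, or scoping it away from binary endpoints, restores the original bytes.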

How to Download a File (from URL) in TypeScript

Update: This question used to ask about Google Cloud Storage, but I have since realized the issue is reproducible merely by trying to save the download to local disk. Thus, I am rephrasing the question to be entirely about file downloads in TypeScript and to no longer mention Google Cloud Storage.
When attempting to download and save a file in TypeScript with web-request (though I experienced the same issue with request and request-promise), all the code seems to execute correctly, but the resulting file is corrupted and cannot be viewed. For example, if I download an image, the file is not viewable in any application.
// Seems to work correctly
const download = await WebRequest.get(imageUrl);
// `Buffer.from()` also takes an `encoding` parameter, but it's unclear how to determine the encoding of a download
const imageBuffer = Buffer.from(download.content);
// I *think* this line is straightforward
const imageByteArray = new Uint8Array(imageBuffer);
// Saves a corrupted file
const file = fs.writeFileSync("/path/to/file.png", imageByteArray);
I suspect the issue lies within the Buffer.from call not correctly interpreting the downloaded content, but I'm not sure how to do it right. Any help would be greatly appreciated.
Thanks so much!
From what I saw in the examples for web-request, download.content is just a string. That is the root of the corruption: the binary body has already been decoded to a string before Buffer.from ever runs, so the original bytes are lost. If you want to upload a string to Cloud Storage using the node SDK, you can use File.save, passing that string directly.
Alternatively, you could use one of the solutions seen here.
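The underlying pitfall is the same in any language: once a binary body has been decoded to a string, the bytes are already corrupted, so read the body as raw bytes instead. For comparison, a minimal C# sketch of the same download-and-save flow done byte-wise (the URL and path are placeholders):
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class BinaryDownload
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // GetByteArrayAsync returns the raw response body with no string
        // decoding, so the image bytes arrive intact.
        byte[] bytes = await client.GetByteArrayAsync("https://example.com/image.png");
        File.WriteAllBytes("/path/to/file.png", bytes);
    }
}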

App crashes when storage().putFile is called with assets-library uri on iOS

I am trying to upload an image from the camera roll of the phone to Firebase Storage using react-native-firebase. However, when I pass the uri of the image to the putFile method, the app crashes without an error.
For example:
const uri = 'assets-library://asset/asset.JPG?id=3DEE5FA3-9E58-479B-9AD9-A7FDBEDF0502&ext=JPG';
firebase.storage().ref('test.jpeg').putFile(uri)
.then(...)
.catch(...);
Is this the expected behavior? If yes, how can I transform the assets uri to a full file path?
This was a bug that was fixed in the RNFirebase release v4.3.0.
If anybody is getting this in the future, it may help to note that you can't upload to the root reference (e.g. .ref()).
Instead you need something like .ref('images/xyz'), etc.

What is the Azure API version

I'm trying to access the result of a GET request provided by Azure, as shown in the example: https://msdn.microsoft.com/sv-se/library/azure/dn820159.aspx
My problem is that api-version is a mandatory argument, but I have no idea what to put in it. I'm a bit lost with the Azure Batch documentation; it doesn't seem to be complete.
I found something on an Azure webpage: https://azure.microsoft.com/en-us/documentation/articles/search-api-versions/ where the api-version was api-version=2015-02-28. However, if I try it in my browser, I get this response: "key":"Reason","value":"The specified api version string is invalid".
Any idea what I can put in the api-version parameter?
Have a look here
As of the time of this writing:
The version of the Batch API described here is '2016-07-01.3.1', and
using that version is recommended where possible.
Earlier versions include '2016-02-01.3.0', '2015-12-01.2.1',
'2015-11-01.2.1', '2015-06-01.2.0', '2015-03-01.1.1', and
'2014-10-01.1.0'.
So try specifying '2016-07-01.3.1'. (The api-version=2015-02-28 you found comes from the Azure Search documentation; each Azure service has its own set of valid API versions.)
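To make the usage concrete, here is a minimal sketch of where the parameter goes, assuming a Batch account endpoint of the usual form (the account name and region are placeholders, and a real call also needs an Authorization header):
using System;
using System.Net.Http;
using System.Threading.Tasks;

class BatchApiVersionExample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // "myaccount" and "westeurope" are made-up placeholders; use your
        // own Batch account endpoint. The api-version query parameter must
        // be a version string the service supports.
        var url = "https://myaccount.westeurope.batch.azure.com/jobs" +
                  "?api-version=2016-07-01.3.1";
        // Authentication (shared key or Azure AD token) is omitted here.
        var response = await client.GetAsync(url);
        Console.WriteLine(response.StatusCode);
    }
}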

Azure BlockBlob Upload From Stream always has content length 0

When I call await blockBlob.UploadFromStreamAsync(stream);, I find that the stream has a content length of some value (500+ bytes).
But once the call is made, I see that there is no change in the blockblob.
I check it in Fiddler and see that the request has an entity Content-Length of 0.
I would appreciate it if someone could guide me on how to debug a problem like this.
Thanks
The question was: how to debug this?
I had the same issue and found this helpful doc to get tracing enabled:
Client-side Logging with the .NET Storage Client Library
My issue was that, when uploading to Azure File storage with UploadFromStreamAsync while the destination resource already existed (an overwrite), a fileStream.Seek(0, SeekOrigin.Begin) was needed. When uploading (and creating) new resources, the seek was not required.
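A minimal sketch of that fix, assuming the classic Microsoft.WindowsAzure.Storage client:
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

static class BlobUploadHelper
{
    // If the stream was already consumed (e.g. to hash or copy it), its
    // position sits at the end, so the upload sends zero bytes and Fiddler
    // shows Content-Length: 0. Rewind before uploading.
    public static async Task UploadRewoundAsync(CloudBlockBlob blockBlob, Stream stream)
    {
        if (stream.CanSeek)
            stream.Seek(0, SeekOrigin.Begin);
        await blockBlob.UploadFromStreamAsync(stream);
    }
}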
Why are you converting the byte array instead of uploading it directly? Like this:
blob.UploadFromByteArray(bytearray, 0, bytearray.Length);

Upload File with brackets ([ & ]) in the name

I'm moving a ClickOnce install from a regular web server to Azure Blob storage and have a problem with some of the files. The filenames contain [ ] and CloudBlob.UploadFile fails with an exception:
Microsoft.WindowsAzure.StorageClient.StorageException:
Error accessing blob storage: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
The code has been used for a while and only fails on files with [ ] in the name, so I don't believe it is an actual authentication failure. In this particular case, this is the seventh file being uploaded in a loop. I found this link on MSDN about valid file names and this on Stack Overflow, which both show problems with square brackets in URLs and reference UrlEncode. I added a call to UrlEncode, and that did not help. The container is created with public access since we use it to support customer downloads of our software. We have been hosting a "test" install in another container and have not had permission problems accessing that either.
I can upload the file with no name changes and then rename it to add the "path" using Neudesic's Azure Storage Explorer tool, so what is that tool doing that I am not?
I see you're using the 1.7 SDK. This is a small encoding issue with the SDK which is also present in v2.0. Let's see what happens.
No encoding
account.CreateCloudBlobClient()
.GetContainerReference("temp")
.GetBlobReference("abc[]def.txt")
.UploadFile("myfile.txt");
If you don't encode the blob name, you'll end up with a request to the following URL which is causing the authentication exception:
http://account.blob.core.windows.net/temp/abc[]def.txt
This is because the SDK internally uses Uri.EscapeUriString to encode your string, but that method doesn't take square brackets into account.
Encoding
Then you would expect the following to do the trick:
account.CreateCloudBlobClient()
.GetContainerReference("temp")
.GetBlobReference(HttpUtility.UrlEncode("abc[]def.txt"))
.UploadFile("myfile.txt");
The issue here is that you'll end up with this url:
http://account.blob.core.windows.net/temp/abc%255b%255ddef.txt
So what's happening here? Calling HttpUtility.UrlEncode turns abc[]def.txt to abc%5B%5Ddef.txt, which is correct. But internally, the SDK will encode this string again which results in abc%255b%255ddef.txt, which isn't what you want.
Workaround
The only way to apply encoding that takes square brackets into account is a small workaround. If you pass the full URL to the GetBlobReference method, the SDK assumes you did all the encoding yourself:
var container = account.CreateCloudBlobClient().GetContainerReference("temp");
var blob = container.GetBlobReference(String.Format("{0}/{1}",
container.Uri, System.Web.HttpUtility.UrlEncode("abc[]def.txt")));
blob.UploadFile("myfile.txt");
This results in a correctly encoded URL:
http://account.blob.core.windows.net/temp/abc%5b%5ddef.txt
And if you use a tool like CloudXplorer, you'll see the blob with the correct filename.
There are two known breaks in the Uri class in .NET 4.5:
• '[' and ']' characters are no longer escaped
• '\' character is now escaped as %5C
This is causing an authentication failure when the server attempts to validate the signature of the request, as the canonicalized string is now different on the client and the server.
There are a few workarounds clients can use while this issue is present. The correct solution will depend on your specific application and requirements.
Avoid the '[', ']', or '\' characters in resource names
By simply avoiding these characters altogether, you will be able to avoid the issue described above.
Target .NET 4.0
Currently the recommendation is for clients to simply continue targeting .NET 4.0 while a full solution is being investigated. Note that since .NET 4.5 is an in-place upgrade, clients can still take advantage of some performance improvements (in the GC, etc.) without specifically targeting the .NET 4.5 profile. For Windows RT developers this is not an option, and you will therefore require the workarounds detailed below.
Pre-Escape Data
If possible, a client can pre-escape the data or replace the affected characters with non-affected ones.
This is why the workaround above works.
