I have an Azure Function that will call an external API via HttpClient. The external API returns a JSON response. I want to save the response directly to an ADLS File.
My simplistic code is:
public async Task UploadFileBulk(Stream contentToUpload)
{
    await this._theClient.FileClient.UploadAsync(contentToUpload);
}
this._theClient is a simple wrapper class around the various Azure Data Lake classes such as DataLakeServiceClient, DataLakeFileSystemClient, DataLakeDirectoryClient, and DataLakeFileClient.
I'm happy this wrapper works as I expect: I spin one up, set the service, file system, directory, and then a filename to create. I've used this wrapper class to create directories etc., so it behaves as expected.
I am calling the above method as follows:
await dlw.UploadFileBulk(await this._httpClient.GetStreamAsync("<endpoint>"));
I see the file getting created in the lake directory with the name I want; however, if I then download the file using Storage Explorer and try to open it in, say, VS Code, it's not in a recognisable format (I can "force" VS Code to open it, but it looks like binary to me).
If I sniff the traffic with Fiddler, I can see the content from the external API is JSON: the Content-Type is application/json and the body shows in Fiddler as JSON.
If I look at the calls to the ADLS endpoint I can see a PUT call followed by two PATCH calls.
The first PATCH call looks like it is the one sending the content; it has a Content-Type header of application/octet-stream, and the request body is the "binary looking" content.
I am using HttpClient.GetStreamAsync because I don't want my Function to load the entire API payload into memory (some of the external API endpoints return very large files, over 100 MB). I am thinking I can "stream the response from the external API straight into ADLS".
Is there a way to change how the ADLS FileClient.UploadAsync(Stream stream) method works so I can tell it to upload the file as a JSON file with a content type of application/json?
EDIT:
It turns out the external API was sending back gzipped content, so once I added the following AutomaticDecompression code to my Function's startup, the files uploaded to ADLS as expected.
public override void Configure(IFunctionsHostBuilder builder)
{
    builder.Services.AddHttpClient("default", client =>
    {
        client.DefaultRequestHeaders.Add("Accept-Encoding", "gzip, deflate");
    }).ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler
    {
        AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
    });
}
@Gaurav Mantri has given me some pointers on whether the pattern of "streaming from an output to an input" is actually correct; I will research this further.
Regarding the issue, please refer to the following code:
var uploadOptions = new DataLakeFileUploadOptions();
uploadOptions.HttpHeaders = new PathHttpHeaders();
uploadOptions.HttpHeaders.ContentType = "application/json";
await fileClient.UploadAsync(stream, uploadOptions);
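To tie the two parts together, here is a minimal sketch (not the original poster's code) of streaming the external API response straight into ADLS while setting the JSON content type. It uses HttpCompletionOption.ResponseHeadersRead so the body is never buffered in memory; the class wiring and the endpoint parameter are assumptions for illustration.
// Hedged sketch: stream an external API response straight into an ADLS Gen2 file
// with Content-Type set to application/json. Names and wiring are placeholders.
using System.Net.Http;
using System.Threading.Tasks;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

public class LakeUploader
{
    private readonly HttpClient _httpClient;
    private readonly DataLakeFileClient _fileClient;

    public LakeUploader(HttpClient httpClient, DataLakeFileClient fileClient)
    {
        _httpClient = httpClient;
        _fileClient = fileClient;
    }

    public async Task StreamApiResponseToLakeAsync(string endpoint)
    {
        // ResponseHeadersRead returns as soon as the headers arrive, so the body
        // is consumed as a stream rather than buffered in memory.
        using var response = await _httpClient.GetAsync(
            endpoint, HttpCompletionOption.ResponseHeadersRead);
        response.EnsureSuccessStatusCode();

        await using var apiStream = await response.Content.ReadAsStreamAsync();

        var uploadOptions = new DataLakeFileUploadOptions
        {
            HttpHeaders = new PathHttpHeaders { ContentType = "application/json" }
        };

        // UploadAsync reads from the stream and performs the Create/Append/Flush
        // calls (the PUT + PATCH requests seen in Fiddler) on our behalf.
        await _fileClient.UploadAsync(apiStream, uploadOptions);
    }
}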
Related
I'm trying to send a POST request with Postman to our AWS Lambda server. Let me first state that when running the web server on my laptop using the Visual Studio debugger, everything works fine. When trying to do exactly the same against the URL of the AWS Lambda, I'm getting the following errors when sifting through the logging:
When uploading the normal xlsx file (593 KB in size):
Split or spanned archives are not supported.
When uploading the same file but with a few worksheets removed (because I thought maybe the size was too big, which shouldn't matter, but let's try):
Number of entries expected in End Of Central Directory does not correspond to number of entries in Central Directory.
When uploading a random xlsx file:
Offset to Central Directory cannot be held in an Int64.
I do not know what is going on. It might have something to do with the way Postman serializes the xlsx file and the way my debug session (on a Windows machine) deserializes it, which is different from the way AWS Lambda deserializes it, but that's just a complete guess.
I always get a 400 - Bad Request response
I'm at a loss and am hoping someone here knows what to do.
This is the method in my controller; however, the problem occurs before reaching it:
[HttpPost("productmodel")]
public async Task<IActionResult> SeedProductModel()
{
    try
    {
        _logger.LogInformation("Starting seed product model");
        var memoryStream = new MemoryStream();
        _logger.LogInformation($"request body: {Request.Body}");
        Request.Body.CopyTo(memoryStream);
        var command = new SeedProductModelCommand(memoryStream);
        var result = await _mediator.Send(command);
        if (!result.Success)
        {
            return BadRequest(result.MissingProducts);
        }
        return Ok();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex.Message);
        return BadRequest();
    }
}
Postman: we do not use API keys for our test environment.
Since you are uploading binary content to API Gateway, you need to enable it through the console.
Go to API Gateway -> select your API -> Settings -> Binary Media Types -> add application/octet-stream.
Save it and make sure to redeploy your API, otherwise your changes will have no effect.
To do so, select your API -> Actions -> Deploy API
I am new to Azure and working on the storage account for one of my applications. Basically, I have JSON files stored in Azure Blob Storage.
I want to read the data from these files in a Node.js application and do some filtering on it, which is eventually exposed as a secured REST endpoint so the data can be viewed in the UI/client as an HTTP response.
I have gone through the docs about the different blob storage operations exposed by the Node SDK, which can be found at the link below:
https://github.com/Azure/azure-storage-node
But the question I have is: how do I read the JSON files? I see one method, getBlobToStream. Is this going to give me the JSON content in the callback, so that I can do further processing on the data and send it as a response to the clients who requested it?
Could someone explain how to do this in a better way, or is this the only option we have?
Thanks for the help.
To use getBlobToStream, you have to define a writable stream.
So I recommend you use getBlobToText to avoid that trouble.
If no error occurs, this method returns the blob content as text in the callback. You can then parse it as JSON. A simple example is below.
blobService.getBlobToText(container, blobname, function(error, text){
    if (error) {
        console.error(error);
        res.status(500).send('Fail to download blob');
    } else {
        var data = JSON.parse(text);
        res.status(200).send('Filtered Data you want to send back');
    }
});
I am trying a sample app with the following workflow:
Wait for new file (csv) in dropbox folder
Load the file contents
Pass the file contents to an azure function to further process
I am getting stuck on how to pass the file contents to the Azure Function. I keep getting an UnsupportedMediaType error with "Message": "The WebHook request must contain an entity body formatted as JSON".
How do I get the output of the second stage into a function?
What I typically do in those scenarios is create a JSON body for the Function and add the message content I want to send to the function as a Base64 string inside that JSON body (e.g. as a Payload or Body property).
This is a similar approach to how Logic Apps itself handles certain media types at runtime.
{"OriginalFileName" : "myfile.csv", "PayLoad" : "ContentBase64String"}
I am trying to create a blob from a PDF I am creating with pdfmake, so that I can send it to a remote API that only handles blobs.
This is how I get my PDF file:
var docDefinition = { content: 'This is an sample PDF printed with pdfMake' };
var pdfDoc = printer.createPdfKitDocument(docDefinition); // printer: a pdfmake PdfPrinter instance created elsewhere
pdfDoc.pipe(fs.createWriteStream('./pdfs/test.pdf'));
pdfDoc.end();
The above lines of code do produce a readable pdf.
Now how can I get a blob from there? I have tried many options (creating the blob from the stream with the blob-stream module, creating it from the file with fs, creating it from a base64 string with b64toBlob), but all of them at some point require the Blob constructor, for which I always get an error even if I require the blob module:
TypeError: Blob is not a constructor
After some research, I found that the Blob constructor seems to be supported only client-side.
All the npm packages I have found that seem to deal with this issue only work client-side: blob-stream, blob, blob-util, b64toBlob, etc.
So, how can I create a blob server-side on Node?
I don't understand why almost nobody else needs to create a blob server-side. The only thread I could find on the subject is this one.
According to that thread, apparently:
The Solution to this problem is to create a function which can convert between Array Buffers and Node Buffers. :)
Unfortunately, this does not help me much, as I clearly lack some important knowledge needed to comprehend it.
Use the node-blob npm package:
const Blob = require('node-blob');
let myBlob = new Blob(["something"], { type: 'text/plain' });
I've created a file system abstraction where I store files with a relative path, e.g. /uploads/images/img1.jpg.
These can then be saved either on the local file system (relative to a folder) or on Azure. I can also ask a method to give me the URL to access that relative path.
In Azure, currently this is being done similar to the below:
public string GetWebPathForRelativePathOnUserContentStorage(string relativeFileFullPath)
{
    var container = getCloudBlobContainer();
    CloudBlockBlob blob = container.GetBlockBlobReference(relativeFileFullPath);
    return blob.Uri.ToString();
}
On a normal website, there might be, say, 40 images on one page, so this gets called around 40 times. Is this, first of all, slow? I've noticed there is a particular pattern in the generated URL:
https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
Can I safely generate that URL without using the Azure storage API?
On a normal website, there might be, say, 40 images on one page, so this gets called around 40 times. Is this, first of all, slow?
Not at all. The code you wrote above does not make any calls to storage; it just creates an instance of the CloudBlockBlob object. If you were using the GetBlockBlobReferenceFromServer method, it would be a different story, because that method does make a call to storage.
I've noticed there is a particular pattern in the generated URL:
https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
Can I safely generate that URL without using the Azure storage API?
Absolutely, yes. Assuming you're using just the standard setup, that would be perfectly fine. Non-standard setups would include things like using a custom domain for your blob storage or connecting to the geo-secondary location of your storage account.
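For illustration, here is a minimal sketch of building that URL by hand, assuming the standard primary endpoint (no custom domain); the account and container names are placeholders, and the relative path is assumed to already be URL-safe.
using System;

public class UserContentUrlBuilder
{
    // Placeholders: substitute your real storage account and container names.
    private const string AccountName = "mystorageaccount";
    private const string ContainerName = "usercontent";

    public string GetWebPathForRelativePathOnUserContentStorage(string relativeFileFullPath)
    {
        // Trim a leading slash so "/uploads/images/img1.jpg" appends cleanly.
        var blobPath = relativeFileFullPath.TrimStart('/');

        // Same pattern the SDK produces:
        // https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
        return $"https://{AccountName}.blob.core.windows.net/{ContainerName}/{blobPath}";
    }
}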