FineUploader server-side renaming of the file before the PUT method - Azure

I'm just starting to test Fine Uploader, and I wonder:
When Fine Uploader uploads files directly to a blob container on Azure,
I see the files with a GUID name instead of the original name.
Is there any option to set, on the server side, the file name and the full path where the file is saved?

Yes. You can retrieve the name for any file from your server via an AJAX call before it is uploaded and supply it to Fine Uploader Azure, making use of the fact that the blobProperties.name option allows for a promissory return value. For example:
new qq.azure.FineUploader({
    blobProperties: {
        name: function(fileId) {
            return new Promise(function(resolve) {
                // retrieve file name for this file from your server...
                resolve(filenameFromServer)
            })
        }
    },
    // all other options here...
})
The above option will be called by Fine Uploader Azure once per file, just before the first request is sent. This is true of chunked and non-chunked uploads. The value passed into resolve will be used as the new file name for the associated file.
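For instance, here is a minimal sketch of wiring that promissory option to a server call; the /blob-name endpoint, its response shape, and the container URL are all hypothetical:
var uploader = new qq.azure.FineUploader({
    request: {
        // hypothetical blob container URL
        endpoint: 'https://myaccount.blob.core.windows.net/my-container'
    },
    // signature endpoint and all other required options here...
    blobProperties: {
        name: function(fileId) {
            // hypothetical server endpoint that returns the blob name/path to use,
            // e.g. { "blobName": "invoices/2021/original-name.pdf" }
            return fetch('/blob-name?file=' + encodeURIComponent(uploader.getName(fileId)))
                .then(function(response) { return response.json(); })
                .then(function(data) { return data.blobName; });
        }
    }
});
Whatever name the promise resolves to is used as the blob name, so the server stays in full control of the path under which each file is stored.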

Related

Trying to use HttpClient.GetStreamAsync straight to the ADLS FileClient.UploadAsync

I have an Azure Function that will call an external API via HttpClient. The external API returns a JSON response. I want to save the response directly to an ADLS File.
My simplistic code is:
public async Task UploadFileBulk(Stream contentToUpload)
{
    await this._theClient.FileClient.UploadAsync(contentToUpload);
}
The this._theClient is a simple wrapper class around the various Azure Data Lake classes such as DataLakeServiceClient, DataLakeFileSystemClient, DataLakeDirectoryClient, and DataLakeFileClient.
I'm happy that this wrapper works as I expect: I spin one up, set the service, file system, directory, and then a file name to create. I've used this wrapper class to create directories etc., so it works as I expect.
I am calling the above method as follows:
await dlw.UploadFileBulk(await this._httpClient.GetStreamAsync("<endpoint>"));
I see the file getting created in the Lake directory with the name I want; however, if I then download the file using Storage Explorer and try to open it in, say, VS Code, it's not in a recognisable format (I can "force" Code to open it, but it looks like binary to me).
If I sniff the traffic with fiddler I can see the content from the external API is JSON, content-type is application/json and the body shows in fiddler as JSON.
If I look at the calls to the ADLS endpoint I can see a PUT call followed by two PATCH calls.
The first PATCH call looks like it is the one sending the content; it has a Content-Type header of application/octet-stream, and the request body is the "binary looking content".
I am using HttpClient.GetStreamAsync because I don't want my Function to have to load the entire API payload into memory (some of the external API endpoints return very large files, over 100 MB). I am thinking I can "stream the response from the external API straight into ADLS".
Is there a way to change how the ADLS FileClient.UploadAsync(Stream stream) method works so I can tell it to upload the file as a JSON file with a content type of application/json?
EDIT:
It turns out the external API was sending back zipped content, so once I added the following AutomaticDecompression code to my function's startup, the files were uploaded to ADLS as expected.
public override void Configure(IFunctionsHostBuilder builder)
{
    builder.Services.AddHttpClient("default", client =>
    {
        client.DefaultRequestHeaders.Add("Accept-Encoding", "gzip, deflate");
    }).ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler
    {
        AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
    });
}
@Gaurav Mantri has given me some pointers on whether the pattern of "streaming from an output to an input" is actually correct; I will research this further.
Regarding the content type issue, please refer to the following code:
var uploadOptions = new DataLakeFileUploadOptions();
uploadOptions.HttpHeaders = new PathHttpHeaders();
uploadOptions.HttpHeaders.ContentType = "application/json";
await fileClient.UploadAsync(stream, uploadOptions);

How can I check if a file has finished uploading before moving it with the Google Drive API v3?

I'm writing a small archiving script (in node.js) to move files on my Google Drive to a predetermined folder if they contain .archive.7z in the filename. The script is run periodically as a cron job, and the file movement has not caused any issues, but files still in the process of being uploaded by my desktop client are moved before they're finished. This terminates the upload and results in corrupted files in the destination folder.
Files still being uploaded from my desktop to Google Drive are returned by the following function anyway:
async function getArchivedFiles (drive) {
  const res = await drive.files.list({
    q: "name contains '.archive.7z'",
    fields: 'files(id, name, parents)',
  })
  return res.data.files
}
Once the files are moved and renamed with the following code, the upload terminates from my client (Insync) and the destination files are ruined.
drive.files.update({
  fileId: file.id,
  addParents: folderId,
  removeParents: previousParents,
  fields: 'id, parents',
  requestBody: {
    name: renameFile(file.name)
  }
})
Is there any way to check if a file is still being uploaded before moving it?
It turns out that a tiny placeholder-type file is being created on uploads. I'm not sure if this is a Google Drive API behaviour or something unique to the Insync desktop client. This file seems to upload separately and thus can be freely renamed once it's complete.
I worked around this problem by including the file's md5 hash in the filename, and updating my script to only move files when the hash in their filename matches the md5Checksum retrieved from the Google Drive API.
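For reference, a minimal sketch of that filtering step, assuming a hypothetical filename convention of <name>.<md5>.archive.7z:
async function getSafeToMoveFiles (drive) {
  const res = await drive.files.list({
    q: "name contains '.archive.7z'",
    // also request md5Checksum so it can be compared with the hash embedded in the name
    fields: 'files(id, name, parents, md5Checksum)',
  })
  return res.data.files.filter(file => {
    // hypothetical convention: filenames look like "backup.<md5>.archive.7z"
    const match = file.name.match(/\.([a-f0-9]{32})\.archive\.7z$/i)
    return match && file.md5Checksum &&
      match[1].toLowerCase() === file.md5Checksum.toLowerCase()
  })
}
Files whose embedded hash does not yet match the md5Checksum reported by the Drive API are still being uploaded (or are placeholders) and are simply skipped until the next cron run.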

Problems uploading xlsx file in body of POST request to .NET Core app on AWS Lambda

I'm trying to send a POST request with Postman to our AWS Lambda server. Let me first state that when I run the web server on my laptop using the Visual Studio debugger, everything works fine. When I try to do exactly the same thing against the URL of the AWS Lambda, I get the following errors when sifting through the logging:
When uploading the normal xlsx file (it's 593 KB in size):
Split or spanned archives are not supported.
When uploading the same file but with a few worksheets removed (because I thought maybe the size was too big, which shouldn't matter, but let's try):
Number of entries expected in End Of Central Directory does not correspond to number of entries in Central Directory.
When uploading a random xlsx file:
Offset to Central Directory cannot be held in an Int64.
I do not know what is going on. It might have something to do with the way Postman serializes the xlsx file and the way my debug session (on a Windows machine) deserializes it differing from the way AWS Lambda deserializes it, but that's just a complete guess.
I always get a 400 Bad Request response.
I'm at a loss and am hoping someone here knows what to do.
This is the method in my controller, however the problem occurs before this:
[HttpPost("productmodel")]
public async Task<IActionResult> SeedProductModel()
{
try
{
_logger.LogInformation("Starting seed product model");
var memoryStream = new MemoryStream();
_logger.LogInformation($"request body: {Request.Body}");
Request.Body.CopyTo(memoryStream);
var command = new SeedProductModelCommand(memoryStream);
var result = await _mediator.Send(command);
if (!result.Success)
{
return BadRequest(result.MissingProducts);
}
return Ok();
}
catch (Exception ex)
{
_logger.LogError(ex.Message);
return BadRequest();
}
}
Postman: we do not use API keys for our test environment.
Since you are uploading binary content through API Gateway, you need to enable binary support in the console.
Go to API Gateway -> select your API -> Settings -> Binary Media Types -> add application/octet-stream.
Save it and make sure to redeploy your API, otherwise your changes will have no effect.
To do so, select your API -> Actions -> Deploy API.

Node.js: multi-part file upload via REST API

I would like to upload a file by invoking a REST endpoint with a multipart request.
In particular, I am looking at this API: Google Cloud Storage: Objects: insert
I did read about using multer; however, I did not find any complete example showing how to perform this operation.
Could someone help me with that?
https://cloud.google.com/nodejs/getting-started/using-cloud-storage#uploading_to_cloud_storage
This is a good example of how to use multer to upload a single image to Google Cloud Storage. Use multer to create a file stream for each file (storage: multer.memoryStorage()), and handle the file stream by sending it to your GCS bucket in your callback.
However, the link only shows an example for one image. If you want to handle an array of images, create a for loop where you create a stream for each file in your request, but only call next() after the for loop ends. If you call next(); in each loop iteration, you will get the error: Error: Can't set headers after they are sent.
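For completeness, a condensed sketch of the single-file case from the linked sample, using multer's memory storage and the @google-cloud/storage client (the /upload route and bucket name are placeholders):
const express = require('express');
const Multer = require('multer');
const { Storage } = require('@google-cloud/storage');

const app = express();
// keep the uploaded file in memory so its buffer can be streamed to GCS
const multer = Multer({ storage: Multer.memoryStorage() });
const bucket = new Storage().bucket('your-bucket-name'); // placeholder bucket name

app.post('/upload', multer.single('file'), (req, res, next) => {
  if (!req.file) {
    return res.status(400).send('No file uploaded.');
  }
  const blob = bucket.file(req.file.originalname);
  const blobStream = blob.createWriteStream({
    metadata: { contentType: req.file.mimetype }
  });
  blobStream.on('error', next);
  blobStream.on('finish', () => res.status(200).send(`Uploaded ${blob.name}`));
  blobStream.end(req.file.buffer); // write the in-memory buffer to the bucket
});
For multiple files, swap multer.single('file') for multer.array('files') and loop over req.files, calling next() or sending the response only after all streams have finished.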
There is an example for uploading files with the nodejs client library and multer. You can modify this example and set the multipart option:
Download the sample code and cd into the folder:
git clone https://github.com/GoogleCloudPlatform/nodejs-docs-samples/
cd nodejs-docs-samples/appengine/storage
Edit the app.yaml file and include your bucket name:
GCLOUD_STORAGE_BUCKET: YOUR_BUCKET_NAME
Then in the source code, you can modify the publicUrl variable according to Objects: insert example:
const publicUrl = format(`https://www.googleapis.com/upload/storage/v1/b/${bucket.name}/o?uploadType=multipart`);
Download a key file for your service account and set the environment variable:
Go to the Create service account key page in the GCP Console.
From the Service account drop-down list, select New service account.
Input a name into the Service account name field.
From the Role drop-down list, select Project > Owner.
Click Create. A JSON file that contains your key downloads to your computer. And finally export the environment variable:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/key/file
After that, you're ready to run npm start, go to the app's frontend, and upload your file.

Content Type not being set on Azure File Store in Node JS

I'm testing the functionality of uploading a file to Azure File Storage with this github sample: Node Getting Started
I modified line 111 to include an option for the contentSettings:
var options = { contentSettings: { contentType: 'application/pdf' } };
fileService.createFileFromLocalFile(shareName, directoryName, fileName, imageToUpload, options, function (error) {
    if (error) {
        callback(error);
    } else {
        ...
...and whether I upload a PDF with a contentType of 'application/pdf' or an image with 'image/png', the file content type is not set once it's posted to Azure Storage.
When I copy the URL to the file in my website, the error comes back saying the content type is incorrect.
What am I doing wrong? How do I set the content types of the uploaded files to make them work in my website?
What version of the azure-storage package are you using? I tried the code you pasted and the content type is set successfully in Azure Storage (latest version).
After uploading successfully, try to call getFileProperties and you can see the properties stored on the Azure Storage server side.
Also, I'm not very clear about the scenario of "copying the URL to the file in my website" and the error you get; could you share more details about that?
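A minimal sketch of that getFileProperties check with the azure-storage Node SDK (assuming the same shareName, directoryName, and fileName variables as in your snippet, and a storage connection string in the environment):
var azure = require('azure-storage');
// picks up AZURE_STORAGE_CONNECTION_STRING from the environment
var fileService = azure.createFileService();

fileService.getFileProperties(shareName, directoryName, fileName, function (error, result) {
  if (error) {
    console.error(error);
  } else {
    // should print the contentType you passed in the upload options, e.g. 'application/pdf'
    console.log(result.contentSettings && result.contentSettings.contentType);
  }
});
If this prints the expected content type, the property is stored correctly and the problem lies in how the website requests the file rather than in the upload.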
