Node.js: transform image data back to an actual image

My server receives a file from an HTTP request and uploads it to IBM Cloud Object Storage.
The server also allows this file to be recovered: recovery is triggered by a GET HTTP request that should return said file.
This works fine for "basic" data formats such as text files. However, I run into problems with more complex types such as images when converting them back.
The image is uploaded to the datastore. The element stored is the buffer itself:
req.files[0].buffer
When getting the image back from the datastore, how can I transform it back to a readable format for my computer?
On the server, the retrieved data is a string.

If you are using ExpressJS you can do this:
const data = req.files[0].buffer;
res.contentType('image/jpeg'); // set to the actual image type; JPEG is assumed here
res.send(data);
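For the retrieval side, here is a minimal sketch of a GET route that fetches the stored buffer back from IBM Cloud Object Storage and returns it as an image. It assumes the S3-compatible ibm-cos-sdk client; the endpoint, bucket name, and environment variables are placeholders, not taken from the question.

const express = require('express');
const COS = require('ibm-cos-sdk'); // S3-compatible IBM COS client

const app = express();
const cos = new COS.S3({
  endpoint: 'https://s3.us-south.cloud-object-storage.appdomain.cloud', // placeholder endpoint
  apiKeyId: process.env.COS_API_KEY,
  serviceInstanceId: process.env.COS_INSTANCE_ID,
});

app.get('/images/:key', async (req, res) => {
  try {
    // getObject returns the stored buffer in data.Body
    const data = await cos.getObject({ Bucket: 'my-bucket', Key: req.params.key }).promise();
    res.contentType(data.ContentType || 'image/jpeg'); // fall back to JPEG if no type was stored
    res.send(data.Body);
  } catch (err) {
    res.status(404).send('Not found');
  }
});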

Related

AWS Lambda function proxies requests for a binary blob (PDF) from the service layer and returns it to the client

I've created a Lambda function so that I can use it for validation purposes and then proxy the request to the service layer. The service layer response contains a binary blob (PDF), which goes through the Lambda function and the API Gateway before finally reaching the client.
The first problem we ran into was that the PDF got transformed or corrupted and just came back as a blank PDF. Then I found this post, which did not make any sense to me at first, until I saw this AWS doc. It turns out you are required to encode the binary data as base64 and set the indicator 'isBase64Encoded' to true. The gateway eventually converts the response back to the binary blob.
TBH, I am new to AWS and I don't really understand why it works this way. What's wrong with passing through the original binary blob, and why are those conversion steps necessary?
Here is the list of things I had to do:
Configured */* as a Binary Media Type on the gateway. (I tried to use application/pdf, but it did not work?)
Made sure the response body from the service layer is not transformed into a string (I am using request, and by default it gives me a string). I send encoding: null along with the request.
When I get the Buffer data from the service layer, I use Buffer to convert the response body to base64 encoding.
In the Lambda output, I set isBase64Encoded to true.
Finally, I get the unaltered PDF...
I am wondering if someone can confirm I am doing this in the expected way? Or maybe there is a better way?
Also, when we set the binary support media type to */*, doesn't this mean it accepts all media types? But I only want PDF to be supported.
This doc (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings.html) should be able to answer your question. And there are two things you need to note:
You can pass the original binary file (blob) as well as a base64-encoded binary file through API Gateway.
Ref: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-content-encodings-examples-image-lambda.html
*/* works in your case, but it means API Gateway will treat all payloads as binary data, and this breaks payloads with text data, for example JSON payloads. So ideally application/pdf should be used as the "Binary Media Type".
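For reference, here is a minimal sketch of the Lambda side of this flow, assuming a Node.js Lambda behind an API Gateway Lambda proxy integration; the upstream URL is a placeholder.

const https = require('https');

// Fetch the PDF from the service layer as a Buffer (no string conversion)
function fetchPdf(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      const chunks = [];
      res.on('data', (chunk) => chunks.push(chunk));
      res.on('end', () => resolve(Buffer.concat(chunks)));
    }).on('error', reject);
  });
}

exports.handler = async () => {
  const pdfBuffer = await fetchPdf('https://example.com/report.pdf'); // placeholder URL

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/pdf' },
    body: pdfBuffer.toString('base64'),
    isBase64Encoded: true, // lets API Gateway convert the body back to binary
  };
};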

Apollo GraphQL: query an uploaded file

Apollo Server 2.0 has the ability to receive file uploads as described in this blog post.
However, all the tutorials and blog posts I found only showed how to upload a file. Nobody demonstrated how to actually retrieve the file back to display it onscreen.
Does anybody know how to properly query the file contents for display onscreen?
Also, is it possible that there is simply no way of querying a file, and you have to build a separate REST endpoint to retrieve the contents?
Some thoughts:
I imagine the query to be something like
query {
  fetchImage(id: "someid")
}
with the respective server-side definition
type Query {
  fetchImage(id: ID!): Upload # maybe also a custom type, but how do I include the actual file contents?
}
Hint: Upload is a scalar type that apollo-server automatically adds to your type definition. It is used for the upload, so I imagine it could also be usable for the download/query. Please read the blog post mentioned above for more information.
The response from a GraphQL service is always serialized as a JSON object. Technically, a format other than JSON could be used in serialization but in practice only JSON is used because it meets the serialization requirements in the spec. So, the only way to send a file through GraphQL would be to convert the file into some format that's JSON-compatible. For example, you could convert a Buffer to a byte array and send that as an array of integers. You would also have to send the appropriate mime type. It would be up to the client to convert the byte array back into a usable format on receiving the response.
If you go this route, you'd have to use your own scalar or object type -- the Upload scalar does not support serialization so it will throw if you try to use it as an output type (and it's not really suitable for this sort of thing anyway).
However, while doing this is technically possible, it's also inadvisable. Serializing a large file could cause you to run out of memory, since there's no way to stream data through GraphQL (the entire response has to be in memory before it can be sent). It's much better to serve the file statically (ideally using nginx instead of Node). If your API needs to refer to the file, it can then just return the file's path.
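As a rough sketch of the byte-array approach described above: the FileContents type and the loadFileById helper are hypothetical names for illustration, not part of Apollo Server.

const { ApolloServer, gql } = require('apollo-server');

// Hypothetical schema: a custom output type instead of the Upload scalar
const typeDefs = gql`
  type FileContents {
    mimetype: String!
    data: [Int!]!   # raw bytes sent as an array of integers
  }

  type Query {
    fetchImage(id: ID!): FileContents
  }
`;

const resolvers = {
  Query: {
    fetchImage: async (_, { id }) => {
      // loadFileById is a hypothetical helper returning { buffer, mimetype }
      const file = await loadFileById(id);
      return {
        mimetype: file.mimetype,
        data: Array.from(file.buffer), // Buffer -> array of integers
      };
    },
  },
};

The client would then have to turn the integer array back into bytes (for example with new Uint8Array(data)) and use the returned mimetype to display it.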
You can do this by installing Express alongside Apollo Server.
apollo-server-express
Install the above package and instantiate an Express app with Apollo Server as explained in the package docs.
Then set the static folder using Express like this:
app.use("/uploads", express.static("uploads")); //Server Static files over Http
uploads is my static folder & /uploads will server get request to that path
//Now I can access static files like this
http://localhost:4000/uploads/test.jpg
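For context, a minimal sketch of that wiring, assuming apollo-server-express 2.x; the hello field is only there to make the example self-contained.

const express = require('express');
const { ApolloServer, gql } = require('apollo-server-express');

const typeDefs = gql`
  type Query {
    hello: String
  }
`;
const resolvers = { Query: { hello: () => 'world' } };

const server = new ApolloServer({ typeDefs, resolvers });
const app = express();
server.applyMiddleware({ app });

// Serve uploaded files statically over HTTP
app.use('/uploads', express.static('uploads'));

app.listen(4000, () => {
  console.log(`GraphQL at http://localhost:4000${server.graphqlPath}, files under /uploads`);
});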

Problems uploading an xlsx file in the body of a POST request to a .NET Core app on AWS Lambda

I'm trying to send a POST request with Postman to our AWS Lambda server. Let me first state that, when running the web server on my laptop using the Visual Studio debugger, everything works fine. When trying to do exactly the same against the URL of the AWS Lambda, I'm getting the following errors when sifting through the logging:
When uploading the normal xlsx file (it has a size of 593 kB):
Split or spanned archives are not supported.
When uploading the same file but with a few worksheets removed (because I thought maybe the size was too big, which should be nonsense, but let's try):
Number of entries expected in End Of Central Directory does not correspond to number of entries in Central Directory.
When uploading a random xlsx file:
Offset to Central Directory cannot be held in an Int64.
I do not know what is going on. It might have something to do with the way Postman serializes the xlsx file and the way my debug session (on a Windows machine) deserializes it, which is different from the way AWS Lambda deserializes it, but that's just a complete guess.
I always get a 400 Bad Request response.
I'm at a loss and am hoping someone here knows what to do.
This is the method in my controller; however, the problem occurs before this:
[HttpPost("productmodel")]
public async Task<IActionResult> SeedProductModel()
{
    try
    {
        _logger.LogInformation("Starting seed product model");
        var memoryStream = new MemoryStream();
        _logger.LogInformation($"request body: {Request.Body}");
        Request.Body.CopyTo(memoryStream);
        var command = new SeedProductModelCommand(memoryStream);
        var result = await _mediator.Send(command);
        if (!result.Success)
        {
            return BadRequest(result.MissingProducts);
        }
        return Ok();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex.Message);
        return BadRequest();
    }
}
Postman: we do not use API keys for our test environment.
Since you are uploading binary content to API Gateway, you need to enable it through the console.
Go to API Gateway -> select your API -> Settings -> Binary Media Types -> add application/octet-stream.
Save it and make sure to redeploy your API, otherwise your changes will have no effect.
To do so, select your API -> Actions -> Deploy API

Read content from Azure Blob Storage in a Node API

I am new to Azure and working on the storage account for one of my applications. Basically, I have JSON files stored in Azure Blob Storage.
I want to read the data from these files in a Node.js application and do some filtering on the data, which is eventually exposed as a secured REST endpoint so the UI/client can view the data in the HTTP response.
I have gone through the docs about the different operations on blob storage exposed in the Node SDK, which can be found at the link below:
https://github.com/Azure/azure-storage-node
But the question I have is: how do I read the JSON files? I see one method, getBlobToStream. Is this going to give me the JSON content in the callback, so that I can do further processing on the data and send it as a response to the clients who requested it?
Please can someone explain how to do this in a better way, or is this the only option we have?
Thanks for the help.
To use getBlobToStream, you have to define a writable stream.
So I recommend using getBlobToText to avoid trouble.
If no error occurs, this method will return the blob content as text in the callback. You can then parse it as JSON. A simple example is below.
blobService.getBlobToText(container, blobname, function(error, text){
    if (error) {
        console.error(error);
        res.status(500).send('Fail to download blob');
    } else {
        var data = JSON.parse(text);
        res.status(200).send('Filtered Data you want to send back');
    }
});
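For completeness, a rough sketch of the getBlobToStream alternative mentioned above, assuming the azure-storage package and using the Express response as the writable stream; the container name and route are placeholders.

const azure = require('azure-storage');
const blobService = azure.createBlobService(process.env.AZURE_STORAGE_CONNECTION_STRING);

app.get('/data/:blobname', (req, res) => {
  // Stream the blob straight into the HTTP response instead of buffering it as text
  blobService.getBlobToStream('mycontainer', req.params.blobname, res, function (error) {
    if (error) {
      console.error(error);
      res.status(500).send('Fail to download blob');
    }
  });
});

Note that streaming straight to the response skips the filtering step, which is why getBlobToText is the better fit when the JSON needs to be parsed first.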

How to create a blob in node (server side) from a stream, a file or a base64 string?

I am trying to create a blob from a PDF I am creating with pdfmake, so that I can send it to a remote API that only handles blobs.
This is how I get my PDF file:
var docDefinition = { content: 'This is an sample PDF printed with pdfMake' };
var pdfDoc = printer.createPdfKitDocument(docDefinition); // printer is a pdfmake PdfPrinter instance
pdfDoc.pipe(fs.createWriteStream('./pdfs/test.pdf'));
pdfDoc.end();
The above lines of code do produce a readable pdf.
Now how can I get a blob from there? I have tried many options (creating the blob from the stream with the blob-stream module, creating it from the file with fs, creating it from a base64 string with b64toBlob), but all of them require at some point using the Blob constructor, for which I always get an error even if I require the blob module:
TypeError: Blob is not a constructor
After some research, it seems that the Blob constructor is only supported client-side.
All the npm packages that I have found and which seem to deal with this issue seem to only work client-side: blob-stream, blob, blob-util, b64toBlob, etc.
So, how can I create a blob server-side on Node?
I don't understand why almost nobody else needs to create a blob server-side. The only thread I could find on the subject is this one.
According to that thread, apparently:
The Solution to this problem is to create a function which can convert between Array Buffers and Node Buffers. :)
Unfortunately, this does not help me much, as I clearly seem to lack some important knowledge here to be able to comprehend it.
Use the node-blob npm package:
const Blob = require('node-blob');
let myBlob = new Blob(["something"], { type: 'text/plain' });
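Applied to the pdfmake output, that would look roughly like the sketch below, assuming node-blob accepts Buffer parts the way the browser Blob does; the file path is a placeholder.

const Blob = require('node-blob');
const fs = require('fs');

// Read the PDF written earlier (or collect the stream's chunks into a Buffer instead)
const pdfBuffer = fs.readFileSync('./pdfs/test.pdf');
const pdfBlob = new Blob([pdfBuffer], { type: 'application/pdf' });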
