How to reupload an image uploaded to the server - Servant Haskell?

I have an endpoint that allows users to upload an image to the server. I used Servant.Multipart (MultipartForm with the Tmp backend) to achieve this. I need to reupload the image to another service, something like S3 (currently using tmp.ninja for testing). I used a multipart request in req to do so, and it works well. The problem I am facing is that Servant.Multipart stores the uploaded image in the tmp folder with a .buf extension, so when I upload it to the storage API, it is not saved as an image.
createRequest content = do
  multiPartBody <- reqBodyMultipart [partFileSource "files[]" content]
  res <- req Network.HTTP.Req.POST (https "tmp.ninja" /: "upload.php") multiPartBody jsonResponse mempty
  return (responseBody res :: UploadResponse)
and this is how I run it:
runReq defaultHttpConfig $ createRequest (fdPayload file)
where file is from the multipart request.
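For context, file here would come out of a handler roughly like the following sketch (the route name "upload", the handler name, and returning UploadResponse as JSON are assumptions, not from the original post):
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}

import Control.Monad.Error.Class (throwError)
import Network.HTTP.Req (defaultHttpConfig, runReq)
import Servant
import Servant.Multipart

type UploadAPI = "upload"
  :> MultipartForm Tmp (MultipartData Tmp)
  :> Post '[JSON] UploadResponse

uploadHandler :: MultipartData Tmp -> Handler UploadResponse
uploadHandler multipartData =
  case files multipartData of
    -- with the Tmp backend, fdPayload is the FilePath of the temp file
    (file:_) -> runReq defaultHttpConfig (createRequest (fdPayload file))
    []       -> throwError err400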
Is there a way to accomplish this without writing the file out again as a PNG or JPEG?
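One direction that avoids rewriting the file (a sketch, not a confirmed fix): reqBodyMultipart takes Part values from http-client's Network.HTTP.Client.MultipartFormData, and the Part record lets you override the filename and content type that the remote service sees, so the temp .buf file can be sent under an image name without copying it. The values "upload.png" and "image/png" below are placeholders; in real code they could be derived from fdFileName and fdFileCType of the Servant FileData:
{-# LANGUAGE OverloadedStrings #-}

import Network.HTTP.Client.MultipartFormData (Part (..), partFileSource)
import Network.HTTP.Req

createRequest content = do
  let part = (partFileSource "files[]" content)
        { partFilename    = Just "upload.png"  -- placeholder filename with a proper extension
        , partContentType = Just "image/png"   -- placeholder MIME type
        }
  multiPartBody <- reqBodyMultipart [part]
  res <- req POST (https "tmp.ninja" /: "upload.php") multiPartBody jsonResponse mempty
  return (responseBody res :: UploadResponse)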

Related

Nodejs transform image data back to actual image

My server receives a file from an HTTP request and uploads this file to IBM Cloud Object Storage.
The server also allows this file to be recovered: recovery is triggered by a GET HTTP request that should return said file.
It works fine for "basic" data formats, such as text files. However, I run into problems with more complex types such as images and the "reformatting".
The image is uploaded to the datastore. The element stored is the buffer itself:
req.files[0].buffer
When getting the image back from the datastore, how can I transform it back into a format my computer can read?
On the server, the retrieved data is a string.
If you are using ExpressJS you can do this:
const data = req.files[0].buffer;
res.contentType('image/jpeg'); // adjust to the file's actual MIME type if it isn't JPEG
res.send(data);

Problems uploading an xlsx file in the body of a POST request to a .NET Core app on AWS Lambda

I'm trying to send a POST request with Postman to our AWS Lambda server. Let me first state that when I run the web server on my laptop using the Visual Studio debugger, everything works fine. When I try to do exactly the same against the URL of the AWS Lambda, I get the following errors when sifting through the logging:
When uploading the normal xlsx file (593 KB in size):
Split or spanned archives are not supported.
When uploading the same file but with a few worksheets removed (because I thought maybe the size was too big, which shouldn't matter, but let's try):
Number of entries expected in End Of Central Directory does not correspond to number of entries in Central Directory.
When uploading a random xlsx file:
Offset to Central Directory cannot be held in an Int64.
I don't know what is going on. It might have something to do with the way Postman serializes the xlsx file and the way my debug session (on a Windows machine) deserializes it, which is different from the way AWS Lambda deserializes it, but that's just a guess.
I always get a 400 - Bad Request response.
I'm at a loss and am hoping someone here knows what to do.
This is the method in my controller; however, the problem occurs before it is reached:
[HttpPost("productmodel")]
public async Task<IActionResult> SeedProductModel()
{
try
{
_logger.LogInformation("Starting seed product model");
var memoryStream = new MemoryStream();
_logger.LogInformation($"request body: {Request.Body}");
Request.Body.CopyTo(memoryStream);
var command = new SeedProductModelCommand(memoryStream);
var result = await _mediator.Send(command);
if (!result.Success)
{
return BadRequest(result.MissingProducts);
}
return Ok();
}
catch (Exception ex)
{
_logger.LogError(ex.Message);
return BadRequest();
}
}
In Postman, we do not use API keys for our test environment.
Since you are uploading binary content to API Gateway, you need to enable it through the console.
Go to API Gateway -> select your API -> Settings -> Binary Media Types -> add application/octet-stream.
Save it and make sure to redeploy your API; otherwise your changes will have no effect.
To do so, select your API -> Actions -> Deploy API.

How to save the binary of an image from a multipart form in Node.js

I am trying to implement image upload in pure Node.js (please don't ask why; I am required to do it in pure Node.js). I parsed the request headers, checked for an image or any other file upload, and I can see the bytes of the image request data in the console. I tried to save it with the fs module, and the image saved successfully, but opening it gives me "this file format can't be opened".
How can I save the uploaded image?

Generating URL from Amazon S3 (streaming files from Amazon S3)

I have a problem trying to download files from Amazon S3. I have files stored on Amazon S3, and to access these files, users need to be authenticated. I'm trying to find a way to stream files without downloading each file from Amazon onto my server and then from my server to the end client. I just want to be able to stream the file directly by generating the URL. Can you suggest some ideas?
I know this is an old question but I have a solution for this exact scenario.
Basically, you want to present a file that is stored on Amazon S3 to the user's browser in such a way that it forces the download rather than opening in the browser window. Typically, if you store the file locally on the server then this is simple to do. But, you don't want to have to first download the file to your server from S3 just to send it over to the client, so...
You'll need the Amazon S3 SDK installed which you can get from Nuget here: https://www.nuget.org/packages/AWSSDK.S3/
Also, make sure you're referencing these namespaces:
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
Here's the code that I use to force download a remote file on Amazon S3:
byte[] buffer = new byte[4096];
GetObjectRequest getObjRequest = new GetObjectRequest {
    BucketName = "**yourbucketname**",
    Key = "**yourobjectkey**"
};
IAmazonS3 client = new AmazonS3Client("**youraccesskeyid**", "**yoursecretkey**", RegionEndpoint.EUWest2); //< set your own region
using (GetObjectResponse response = client.GetObject(getObjRequest))
using (Stream stream = response.ResponseStream)
{
    int bytesRead = 0;
    Response.AppendHeader("Content-Disposition", "attachment; filename=" + Path.GetFileName(response.Key));
    Response.AppendHeader("Content-Length", response.ContentLength.ToString());
    Response.ContentType = "application/force-download";
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0 && Response.IsClientConnected)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
        Response.OutputStream.Flush();
        buffer = new byte[4096];
    }
}
As always, I'm sure there are better ways of accomplishing this, but this code works for me.
AWS provides an SDK for .NET which will let you download (and upload) files.
For example, see: http://ceph.com/docs/master/radosgw/s3/csharp/
A quick Google search should give you the answer. If there is something specific which you are unable to do, then please explain your question a little more.

Accept-Encoding: gzip: content is gzip-compressed twice

I'm using IIS to serve a static file. I've configured IIS (using the GUI) to enable both static and dynamic compression.
I'm using HttpClient to download it, as follows:
var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Get, requestUri);
request.Headers.AcceptEncoding.Add(new StringWithQualityHeaderValue("gzip"));
var response = client.SendAsync(request).Result;
var stream = new FileStream("foo", FileMode.Create);
response.Content.CopyToAsync(stream)
.ContinueWith(t => stream.Close())
.Wait();
I'm inspecting the traffic using Fiddler, and I can see that the first one or two responses are not compressed. I assume that IIS is compressing the file in the background and caching it somewhere. The file written to disk is about 14MB (expected).
Later requests are compressed. I can see a Content-Encoding: gzip header in the response, and the file that is downloaded is about 360KB. It's a gzip file, as identified by Cygwin's file command.
However, when I use gzip -d to decompress it, I end up with a 660 KB file which is itself a gzip-compressed file, also identified by file.
If I decompress that file, I get back the 14MB file that I was expecting.
So: why is my file being compressed twice?