I'm using AWS S3 to host a static web page, and almost all assets are gzipped before being uploaded.
During the upload, the "content-encoding" header is correctly set to "gzip" (and this is also reflected when actually loading the file from AWS).
The problem is that the files can't be read: they are still in gzip format even though the correct headers are set...
The files are uploaded using the s3-deploy npm package. Here's a screenshot of what the request looks like:
and here are the contents of the file in the browser:
If I upload the file manually and set the content-encoding header to "gzip", it works perfectly. Sadly, I have a couple of hundred files to upload for every deployment and can't do this manually every time (I hope that's understandable ;) ).
Does anyone have an idea of what's going on here? Has anyone worked with s3-deploy and can help?
I use my own bash script for S3 deployments; you can try something like this:
webpath='path'
BUCKET='BUCKETNAME'
# Upload every pre-gzipped file and mark it as gzip-encoded so browsers decompress it.
for file in "$webpath"/js/*.gz; do
    aws s3 cp "$file" "s3://$BUCKET/js/" --content-encoding 'gzip' --region 'eu-west-1'
done
I am using the multer npm library to read files from Postman, and I am getting the file and its details in my Node.js code (checked via logging req.file). My concern is that I don't want the file to be stored on my local machine; I just want to extract the data from the file and process it further for my requirements.
Is this possible, or can anyone suggest some solutions?
Thanks in advance
From my reading of the multer library, it streams the uploaded file to disk and then processes the file.
If you only want to use properties of the file like the MIME type or original file name, you can use multer and, in the storage options, call cb(null, false) after getting the uploaded file's properties to prevent the file from being stored. If you do want to process the file's contents, you can instead remove it from disk after your processing is done.
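Another option, not covered above but worth sketching, is multer's memoryStorage, which keeps the upload in a buffer and never writes it to disk. A minimal sketch assuming Express; the route path and field name are placeholders, not from the original question:
// Minimal sketch: keep the upload in memory so nothing is written to disk.
// "/import" and the "file" field name are placeholders.
import express from "express";
import multer from "multer";

const app = express();
const upload = multer({ storage: multer.memoryStorage() });

app.post("/import", upload.single("file"), (req, res) => {
  // req.file.buffer holds the raw bytes; originalname and mimetype are also available.
  const data = req.file?.buffer.toString("utf8");
  // ...extract and process `data` here...
  res.json({ name: req.file?.originalname, size: req.file?.size });
});

app.listen(3000);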
I'm trying to create a .zip file by passing the returned body of an HTTP GET request to SharePoint's Create File action.
Body is:
{
  "$content-type": "application/zip",
  "$content": "UEsDBBQACA...="
}
Shouldn't this just work? The docs only describe the Create File field as "Content of the file," which isn't super informative...
I believe I've done this before with a file that was application/pdf and it worked. Unfortunately, I can't find that Logic App (I think it may have been an experiment I've since deleted).
I should note that the Create File action does create a valid .zip file, in that it's not corrupt, but the archive is empty. It's supposed to contain a single .csv file.
I tried decoding the Base64 content and it's definitely binary data.
Any idea where I'm going wrong?
I tested with Postman, and when I used form-data to POST the request, I found that the .zip file couldn't be opened. Then I checked the Logic App run history and found that the problem is that using triggerBody() directly as the file content fails.
This is because triggerBody() contains more than just the $content, so I changed the expression to triggerBody()['$multipart'][0]['body'], and then it works and the .zip file is complete.
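To illustrate the difference, here is a rough sketch of the general shape triggerBody() takes for a multipart/form-data request (an assumption for illustration, not copied from my run history; the part name and filename are placeholders). The zip content that Create File needs sits one level down, inside $multipart[0].body:
{
  "$content-type": "multipart/form-data; boundary=----boundary",
  "$multipart": [
    {
      "headers": { "Content-Disposition": "form-data; name=\"file\"; filename=\"export.zip\"" },
      "body": {
        "$content-type": "application/zip",
        "$content": "UEsDBBQACA...="
      }
    }
  ]
}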
I've been using turtle to call "aws s3 ls" and I'm trying to figure out how to replace that with amazonka.
Absolute S3 URLs were central to how my program worked. I now know how to get objects and filter them, but I don't know how to convert an object to an S3 URL to integrate with my existing program.
I came across the getFile function and tried downloading a file from s3.
Perhaps I had something wrong, but it didn't seem like just the S3 bucket and object key were enough to download a given file. If I'm wrong about that, I need to double-check my configuration.
I am able to successfully upload (put object) .jpg files to S3 with a particular code path, but I receive a 403 Forbidden error when using the same code path to upload a KML file. I am not explicitly restricting file types with a bucket policy, but I feel this must somehow be tied to the bucket policy or CORS configuration.
I was using code based on the Heroku tutorial for uploading images to Amazon S3. The issue ended up being the '+' symbol in the appropriate MIME type, "application/vnd.google-earth.kml+xml": the '+' was being replaced with a space when the file-type query parameter was read by our own S3 endpoint for generating signed requests. We were able to fix this quickly by simply forcing the ContentType to "application/vnd.google-earth.kml+xml" for all KML files going to that endpoint.
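An alternative to hard-coding the type would be to URL-encode the content type before it goes into the query string, so the '+' survives the round trip. A small sketch; the endpoint path and parameter names are made up for illustration:
// Sketch: "+" in a query string is decoded as a space, so encode the MIME type explicitly.
// "/sign-s3" and the parameter names are placeholders, not the real endpoint from the post.
const fileName = "placemarks.kml";
const fileType = "application/vnd.google-earth.kml+xml";
const url = `/sign-s3?file-name=${encodeURIComponent(fileName)}&file-type=${encodeURIComponent(fileType)}`;
// encodeURIComponent turns "+" into "%2B", so the server's query parser no longer
// converts it into a space when generating the signed S3 request.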
My first deploy to AWS.
The files are all in place, and index.html loads.
There are two files in a subdirectory, one .js and one .css.
They both return 200 but fail to load. Chrome says the failure is in the 'parser'.
After trying a few things, I noted that this property is causing it: ContentEncoding: "gzip".
If I remove this property the files are found correctly.
Am I using this property incorrectly?
I am using the Node AWS SDK via this great project: https://github.com/MathieuLoutre/grunt-aws-s3
You can witness this behavior for yourself at http://tidepool.co.s3-website-us-west-1.amazonaws.com/
If you specify Content-Encoding: gzip, then you need to make sure that the content is actually gzipped on S3.
From what I see in this CSS file:
http://tidepool.co.s3-website-us-west-1.amazonaws.com/08-26_6483218-dirty/all-min.css
the actual content is not gzipped, but the Content-Encoding: gzip header is present.
Also keep in mind that S3 cannot compress your content on the fly based on the Accept-Encoding header in the request. You can either store it uncompressed, in which case it will work for all browsers and clients, or store it in a compressed format (gzip/deflate), in which case it will only work for clients that can handle compressed content.
You could also take a look at the official AWS SDK for Node.js.
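For example, with the Node SDK you can gzip the asset yourself and upload it with a matching header, so the stored bytes and the Content-Encoding agree. A minimal sketch assuming the v2 aws-sdk package; the bucket and key are taken from the URL above, and credentials/region come from your environment:
// Sketch: compress the file locally, then upload it with ContentEncoding: gzip
// so the header matches the bytes actually stored in S3.
import { readFileSync } from "fs";
import { gzipSync } from "zlib";
import AWS from "aws-sdk";

const s3 = new AWS.S3({ region: "us-west-1" });

const body = gzipSync(readFileSync("all-min.css")); // content really is gzipped now

s3.putObject(
  {
    Bucket: "tidepool.co",
    Key: "08-26_6483218-dirty/all-min.css",
    Body: body,
    ContentType: "text/css",
    ContentEncoding: "gzip", // matches the stored, compressed bytes
  },
  (err) => {
    if (err) throw err;
  }
);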