Amazon S3 403 Forbidden Error for KML files but not JPG files - node.js

I am able to successfully upload (put object) jpg files to S3 with a particular code path, but I receive a 403 Forbidden error when using the same code path to upload a KML file. I am not explicitly restricting file types with a bucket policy, but I suspect this must somehow be tied to the bucket policy or CORS configuration.

I was using code based on the Heroku tutorial for uploading images to Amazon S3. The issue ended up being that the appropriate MIME type, "application/vnd.google-earth.kml+xml", contains a '+' symbol, and that '+' was being replaced with a space when fetching the file-type query parameter in our own endpoint that generates signed S3 requests. We were able to fix this quickly by forcing the ContentType to "application/vnd.google-earth.kml+xml" for all KML files going through that endpoint.
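A minimal sketch of that fix, assuming an Express route and the 'file-name'/'file-type' query parameters used in the Heroku tutorial (the route path and parameter names are illustrative):

const aws = require('aws-sdk');
const s3 = new aws.S3();

app.get('/sign-s3', (req, res) => {
  const fileName = req.query['file-name'];
  let fileType = req.query['file-type'];
  // '+' in a query string decodes to a space, which mangles the KML MIME type,
  // so force the correct ContentType for .kml files.
  if (fileName && fileName.toLowerCase().endsWith('.kml')) {
    fileType = 'application/vnd.google-earth.kml+xml';
  }
  const params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    Expires: 60,
    ContentType: fileType,
    ACL: 'public-read'
  };
  s3.getSignedUrl('putObject', params, (err, signedUrl) => {
    if (err) return res.status(500).end();
    // The browser must PUT with the same Content-Type the URL was signed for,
    // otherwise S3 rejects the request with 403.
    res.json({ signedRequest: signedUrl });
  });
});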

Related

How to read and write data in spark via an S3 access point

I am attempting to use an S3 access point to store data in an S3 bucket. I have tried saving as I would if I had access to the bucket directly:
someDF.write.format("csv").option("header","true").mode("Overwrite")
.save("arn:aws:s3:us-east-1:000000000000:accesspoint/access-point/prefix/")
This returns the error
IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: "arn:aws:s3:us-east-1:000000000000:accesspoint/access-point/prefix/"
I haven't been able to find any documentation on how to do this. Are access points not supported? Is there a way to set up the access point as a custom data source?
Thank you
The problem is that you have provided the ARN instead of the S3 URL. The URL would be something like this (assuming accesspoint is the bucket name):
s3://accesspoint/access-point/prefix/
There is a Copy S3 URL button at the top right of the AWS console when you are viewing the object or prefix.

Amazonka: How to generate S3:// uri from Network.AWS.S3.Types.Object?

I've been using turtle to call "aws s3 ls", and I'm trying to figure out how to replace that with amazonka.
Absolute s3:// URLs were central to how my program worked. I now know how to get objects and filter them, but I don't know how to convert an Object to an s3:// URL to integrate with my existing program.
I came across the getFile function and tried downloading a file from S3.
Perhaps I had something wrong, but it didn't seem like just the S3 Bucket and S3 Object key were enough to download a given file; if I'm wrong about that, I need to double-check my configuration.

Amazon s3 bucket image access issue: Access Denied

I am getting the following error when I put the image URL in the src.
I am using the following modules to upload an image in Node:
aws = require('aws-sdk'),
multer = require('multer'),
multerS3 = require('multer-s3'),
The image uploads successfully to the bucket, but when I put the same URL in <img src="https://meditationimg.s3.us-east-2.amazonaws.com/profilepic/1507187706799Penguins.jpg" /> it returns the above error.
Does anyone know the solution?
No Such Key is S3's way of saying "404 Not Found."
The request was authorized and syntactically valid, but there's no file in the bucket at the specified path.
You may want to inspect the contents of your bucket from the AWS console.
Make sure you access the image using the same case as it was uploaded and stored on S3. (Generally it should be lower case.)
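As a quick sanity check, a small script like this (a sketch using the same aws-sdk; the bucket, region, and prefix are taken from the URL in the question) lists what is actually stored under the prefix so you can compare the keys character for character:

const aws = require('aws-sdk');
const s3 = new aws.S3({ region: 'us-east-2' });

s3.listObjectsV2({ Bucket: 'meditationimg', Prefix: 'profilepic/' }, (err, data) => {
  if (err) return console.error(err);
  // Compare these keys (including case) with the path in your <img src> URL.
  data.Contents.forEach(obj => console.log(obj.Key));
});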

Setting Metadata in Google Cloud Storage (Export from BigQuery)

I am trying to update the metadata (programmatically, from Python) of several CSV/JSON files that are exported from BigQuery. The application that exports the data is the same as the one modifying the files (and thus uses the same server certificate). The export goes all well, that is, until I try to use the objects.patch() method to set the metadata I want. The problem is that I keep getting the following error:
apiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/storage/v1/b/<bucket>/<file>?alt=json returned "Forbidden">
Obviously, this has something to do with bucket or file permissions, but I can't manage to get around it. How come, if the same certificate is being used for writing files and updating file metadata, I'm unable to update it? The bucket is created with the same certificate.
If that's the exact URL you're using, it's a URL problem: you're missing the /o/ between the bucket name and the object name, i.e. it should be https://www.googleapis.com/storage/v1/b/<bucket>/o/<file>?alt=json.

AWS S3 Returns 200ok parser fails if ContentEncoding: 'gzip'

My first deploy to AWS.
The files are all in place, and index.html loads.
There are two files in a subdir, one .js and one .css.
They both return 200 but fail to load. Chrome says the failure is in the 'parser'.
After trying a few things, I noted that this property is causing it: ContentEncoding: "gzip".
If I remove this property the files are found correctly.
Am I using this property incorrectly?
I am using the Node AWS SDK via this great project: https://github.com/MathieuLoutre/grunt-aws-s3
You can witness this behavior for yourself at http://tidepool.co.s3-website-us-west-1.amazonaws.com/
If you specify Content-Encoding: gzip then you need to make sure that the content is actually gzipped on S3.
From what I see in this CSS file:
http://tidepool.co.s3-website-us-west-1.amazonaws.com/08-26_6483218-dirty/all-min.css
the actual content is not gzipped, but the Content-Encoding: gzip header is present.
Also keep in mind that S3 is unable to compress your content on the fly based on the Accept-Encoding header in the request. You can either store it uncompressed, in which case it will work for all browsers and clients, or store it in a compressed format (gzip/deflate), in which case it will only work for clients that can handle compressed content.
You could also take a look at the official AWS SDK for Node.js.
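For reference, a minimal sketch with the official aws-sdk (the bucket and key are taken from the URL above): compress the body yourself before uploading, and only then set ContentEncoding, so the header matches the bytes actually stored.

const fs = require('fs');
const zlib = require('zlib');
const aws = require('aws-sdk');
const s3 = new aws.S3();

// Gzip the file locally so the stored object really is gzipped.
const gzipped = zlib.gzipSync(fs.readFileSync('all-min.css'));

s3.putObject({
  Bucket: 'tidepool.co',
  Key: '08-26_6483218-dirty/all-min.css',
  Body: gzipped,
  ContentType: 'text/css',
  ContentEncoding: 'gzip' // now matches the stored content
}, (err) => {
  if (err) console.error(err);
});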
