Amazon S3 PUT throws "SignatureDoesNotMatch" - node.js

This AWS security stuff is driving me nuts. I'm trying to upload some binary files from a node app using knox. I keep getting the infamous SignatureDoesNotMatch error with my key/secret combination. I traced it down to this: with e.g. Transmit, I can access the bucket by connecting to s3.amazonaws.com, but I cannot access it via the virtual subdomain mybucket.s3.amazonaws.com. (When I try to access the bucket with the s3.amazonaws.com/mybucket syntax, I get an error saying that only the subdomain style is allowed.)
I have tried setting the bucket policy to explicitly allow PUT from the respective user, but that had no effect. Can anyone please shed some light on how I can enable uploading of files from one specific AWS user?

After a lot of trial and error, I narrowed it down to a couple of issues. I'm not entirely sure which one ultimately fixed it, but here are a few things you might want to try:
Make sure you are setting the right region (datacenter). In my case, this looked like this:
knox.createClient({
    key: this.config.key
  , secret: this.config.secret
  , bucket: this.config.bucket
  , region: 'us-west-2' // cause my bucket is supposed to be in oregon
});
Check your PUT headers. In my case, the Content-Type was accidentally set to undefined, which caused issues:
var headers = {
  'x-amz-acl': 'public-read' // if you want anyone to be able to download the file
};
if (filesize) headers['Content-Length'] = filesize;
if (mime) headers['Content-Type'] = mime;
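For completeness, here is a minimal sketch of a full upload combining both points, assuming knox's putFile(src, dst, headers, callback) form; the credentials, bucket, and file paths are placeholders:

var knox = require('knox');

var client = knox.createClient({
    key: 'AKIA...'        // placeholder access key
  , secret: '...'         // placeholder secret
  , bucket: 'mybucket'
  , region: 'us-west-2'   // must match the bucket's actual region
});

var headers = { 'x-amz-acl': 'public-read', 'Content-Type': 'image/png' };

client.putFile('./avatar.png', '/avatar.png', headers, function (err, res) {
  if (err) return console.error('upload failed:', err);
  console.log('S3 responded with', res.statusCode); // 200 means the signature was accepted
  res.resume(); // drain the response so the socket is released
});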

Related

PermanentRedirect while generating pre signed url

I am having an issue while creating a pre-signed URL from AWS S3 using aws-sdk in nodejs. It gives me: PermanentRedirect - The bucket you are attempting to access must be addressed using the specified endpoint.
const AWS = require('aws-sdk')
const s3 = new AWS.S3()
AWS.config.update({accessKeyId: 'test123', secretAccessKey: 'test123'})
AWS.config.update({region: 'us-east-1'})
const myBucket = 'test-bucket'
const myKey = 'test.jpg'
const signedUrlExpireSeconds = 60 * 60
const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
})
console.log(url)
console.log(url)
How can I resolve this error so that the pre-signed URL works? Also, I need to know what the purpose of Key is.
1st - what is the region of your bucket? S3 is a global service, yet each bucket has a region; you have to select it when creating the bucket.
2nd - when working with S3 outside the N. Virginia region, there can be situations where AWS's internal SSL/DNS is not yet in sync. I have hit this issue multiple times and can't find exact docs on it, but the symptoms are of the redirect / not-found / no-access kind, and after 4-12 hours it just starts working. What I have managed to dig up is that it is related to AWS's internal SSL/DNS handling of S3 buckets outside the N. Virginia region. So that could be it.
3rd - you may have re-created the bucket multiple times, re-using the same name. Bucket names are global, even though buckets are regional. So this could again come down to the 2nd scenario: if within the last 24 hours the bucket lived in a different region, AWS's internal DNS/SSL may not have synced yet.
P.S. Key is the object's key; every object inside a bucket has one. In the AWS console you can navigate a "key" that looks like a path to a file, but it is not a path. S3 has no concept of directories the way hard drives do; any path-looking name is just the object's key. The console simply splits keys on / and displays them as directories to give a better UX while navigating the UI.
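To illustrate the region point above, here is a minimal sketch (credentials, bucket, and region are placeholders) that sets the bucket's actual region on the client itself; note that with aws-sdk v2 a client captures its configuration when it is constructed, so configure before calling new AWS.S3():

const AWS = require('aws-sdk')

AWS.config.update({ accessKeyId: 'test123', secretAccessKey: 'test123' })

// pass the bucket's real region (placeholder shown) so the URL points at the right endpoint
const s3 = new AWS.S3({ region: 'eu-west-2', signatureVersion: 'v4' })

const url = s3.getSignedUrl('getObject', {
  Bucket: 'test-bucket',
  Key: 'test.jpg',   // the object's key, i.e. its full name inside the bucket
  Expires: 60 * 60
})
console.log(url)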

How do we use AWS/S3?

I need some simple example to get started with AWS/S3 usage.
Here is the situation: an iOS app of mine has been transferred from Parse.com to Parse-Server / Heroku.
All is working fine, but I will at some point need file storage for images or sound files.
I have already followed this and configured an S3Adapter.
My problem now is : "How to use it?"
I would like to find some sample code using this S3Adapter that I just configured to save something and retrieve it.
If you have already configured S3 in your Parse Server and provided all the relevant details (bucket, keys, etc.), the next thing is to test it and check that Parse really stores your files on S3 and not on GridStore (which is the default).
In order to test it please go through the following steps:
Open your index.js file, which is located under the root folder of your Parse Server project, and check that your files adapter is S3. It should look something like this (from the Parse Server wiki):
var api = new ParseServer({
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  appId: process.env.APP_ID || 'APPLICATION_ID',
  masterKey: process.env.MASTER_KEY || 'MASTER_KEY',
  ...
  filesAdapter: new S3Adapter(
    "S3_ACCESS_KEY",
    "S3_SECRET_KEY",
    "S3_BUCKET", {
      directAccess: true
    }
  ),
  ...
});
Next you need to save a file from your iOS client. Create a new PFFile and call its saveInBackground method to save it. Before saving the file, parse-server checks whether you provided a custom files adapter; if you did, it will try to use it, otherwise it falls back to the default (GridStore on MongoDB). So your iOS code should look like the following:
Objective-C:
NSData *imageData = UIImagePNGRepresentation(image);
PFFile *imageFile = [PFFile fileWithName:@"image.png"
                                    data:imageData];
[imageFile saveInBackground];
Swift:
let imageData = UIImagePNGRepresentation(image)
let imageFile = PFFile(name:"image.png", data:imageData)
imageFile.saveInBackground()
After the file has been saved, you can go to your bucket in AWS and check whether the file has been added there.
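If you would rather verify from Node instead of the AWS console, here is a minimal sketch using the aws-sdk for Node (region and bucket name are placeholders, and credentials are assumed to come from your environment) that lists the objects Parse has written:

const AWS = require('aws-sdk')

const s3 = new AWS.S3({ region: 'us-east-1' }) // placeholder region

s3.listObjectsV2({ Bucket: 'S3_BUCKET', MaxKeys: 20 }, function (err, data) {
  if (err) return console.error('could not list bucket:', err)
  data.Contents.forEach(function (obj) {
    console.log(obj.Key, obj.Size) // saved PFFiles show up as object keys
  })
})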
Hope it helps. If you need more info please let me know.

Gun.js why do I get the error "You have no persistence layer to save to error"

I'm trying out gun.js. I have it installed as a node.js project, I have configured the Amazon S3 bucket through dotenv, and I have tried adding a data.json file, but I still can't get gun.js to save the file locally or to the S3 bucket.
I know its early days for gun, but I get the feeling I'm missing something obvious.
I'm expecting to find a .json file in the local file system and/or in the S3 bucket, but I get neither.
require('dotenv').config();
var Gun = require('gun');
var gun = Gun({
  file: 'data.json', // local testing and development
  s3: {
    key: process.env.AWS_KEY,       // AWS Access Key
    secret: process.env.AWS_SECRET, // AWS Secret Token
    bucket: process.env.AWS_BUCKET  // The bucket you want to save into
  }
});
gun.put({ hello: 'world' }).key('my/first/data');
@bill Just noticed this now, sorry for the late answer. Thanks to @paul-w for notifying me of this and for his response earlier today.
This question and answer assume you are running a version EARLIER than v0.4.x!
If you are in NodeJS and are getting the error “You have no persistence layer to save to”, it means the default storage drivers (S3, file.js) didn't get installed or were deactivated - which is unusual as this happens automatically.
Try installing gun (again?) via npm install gun in your local NodeJS project directory, not a git clone or a copy&paste.
I can only guess, given the context you explain, that you might have copied/moved gun (like the gun.js file) into your project. The browser will work with just the single file, but NodeJS needs more - it needs the S3/file.js modules, which will be included if installed with npm or properly git cloned.
Also unlikely (since your code doesn't show this): if you happen to call Gun({wire: {put: null, get: null}}) or something similar (this is bad), it will intentionally break the persistence drivers.
If you are in the browser and getting the error (and assuming you're not overwriting the persistence drivers as in the previous paragraph), it could be because of some odd situation, like using an old version of IE or a browser that doesn't have JSON support. Again, all these things are unlikely, but I want to be comprehensive.
Note: The above applies to the question in your title. However your actual question doesn't ask about the error, it asks about not seeing data in data.json or in S3. Answering that below.
On this, @paul-w is more on track. If you are using S3, the file.js module (data.json) automatically deactivates itself. If you are using the file.js module (data.json), then S3 does not get activated. As @paul-w mentioned, v0.4.x will make it easy to have multiple storage engines simultaneously. However, you should see your data in at least one or the other - unless you are getting the "no persistence layer" error, in which case you won't see your data anywhere because there isn't any persistence! But again, the default persistence layers are included with gun by default (unless installation was incorrect, or you explicitly overwrite them - both unusual things).
I hope this answers your question. Sorry I didn't see it till now. Please let me know if this works, and also join the conversation at https://gitter.im/amark/gun . Thank you for helping start the stackoverflow questions! We need more of these!
I think Mark is going to answer this more officially, but the quick answer is that in gun.js 0.3 (current) there is a single gun server peer or storage target, and when you run gun as a server (e.g. from node.js rather than a browser), S3 is preferred, if S3 credentials are specified. But gun is also saving your data changes in browser memory, or localStorage (up to the browser limit of 5MB), and S3 is there for a more permanent storage.
So in the example above, I think the problem is that the file entry will only be used if there is a problem saving changes to S3, and that's why you don't see the new data going there. Maybe try putting an error in the S3 credentials (e.g. add an 'x' for now) and see if it starts using the file path instead.
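For example, a minimal sketch of that test (assuming a pre-0.4.x gun, and assuming, as described above, that the file adapter is only used when S3 is not) is to drop the s3 block entirely and re-run the put:

require('dotenv').config();
var Gun = require('gun');

// no s3 block, so the bundled file.js adapter should be the only storage driver
var gun = Gun({
  file: 'data.json'
});

gun.put({ hello: 'world' }).key('my/first/data');
// after this runs, data.json should appear (or grow) in the project directory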
In gun.js 0.4 there are plans to make use of all peers specified in the constructor or dynamically, but that feature isn't here yet.
(And I probably butchered that answer, but hopefully Mark can correct any inaccuracies in this. I'm new to gun.js but had the same question.)

With Nodejs, why can I read from one Azure blob container but not from another?

I have a nodejs app running as a Worker Role in Azure Cloud Services where I have two blob storage containers that the app will read from when responding to certain user requests.
Here's my setup:
Using the azure-storage package to interface with my blob storage.
Two containers, each holding files of different types that the user may ask for at some point.
And I use the following code to stream the files to the HTTP response:
var azure = require('azure-storage');

exports.getBlobToStream = function(containerName, fileName, res) {
  var blobService = azure.createBlobService();
  blobService.getBlobProperties(containerName, fileName, function(error, properties, status) {
    if (error || !status.isSuccessful) {
      res.header('Content-Type', "text/plain");
      res.status(404).send("File " + fileName + " not found");
    } else {
      res.header('Content-Type', properties.contentType);
      res.header('Content-Disposition', 'attachment; filename=' + fileName);
      blobService.createReadStream(containerName, fileName).pipe(res);
    }
  });
};
One important
In the past I've had no issues reading from either container. In my research on the problem I've found an identical (but outdated) issue on the all-encompassing azure-sdk-for-node repo: https://github.com/Azure/azure-sdk-for-node/issues/434. The solution that fixed that problem also fixed mine, but I can't understand why - particularly when I can read from the other container from within the same app, using the same code, without any issues.
I can live with the solution but want to understand what's going on. Any thoughts or suggestions?
@winsome, thanks for raising this issue with us. If you set EMULATED, the code will call the local storage emulator rather than the real storage account, and it is expected to fail. Regarding the one container that works under EMULATED: a guess is that your local storage emulator also has a container with the same name. Please check.
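If the EMULATED variable turns out to be the culprit, one way to take the environment out of the equation is to pass the account credentials explicitly. A minimal sketch, assuming the azure-storage createBlobService(accountName, accessKey) overload and placeholder account, container, and file names:

var azure = require('azure-storage');

// explicit credentials mean the EMULATED environment variable cannot silently
// redirect requests to the local storage emulator
var blobService = azure.createBlobService('mystorageaccount', process.env.AZURE_STORAGE_ACCESS_KEY);

blobService.getBlobProperties('mycontainer', 'myfile.pdf', function (error, properties) {
  if (error) return console.error(error);
  console.log('found blob with content type', properties.contentType);
});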

Setting Metadata in Google Cloud Storage (Export from BigQuery)

I am trying to update the metadata (programmatically, from Python) of several CSV/JSON files that are exported from BigQuery. The application that exports the data is the same as the one modifying the files (thus using the same server certificate). The export goes well, that is, until I try to use the objects.patch() method to set the metadata I want. The problem is that I keep getting the following error:
apiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/storage/v1/b/<bucket>/<file>?alt=json returned "Forbidden">
Obviously, this has something to do with bucket or file permissions, but I can't manage to get around it. How come, if the same certificate is used for writing files and updating file metadata, I'm unable to update them? The bucket was created with the same certificate.
If that's the exact URL you're using, it's a URL problem: you're missing the /o/ between the bucket name and the object name. The request should go to https://www.googleapis.com/storage/v1/b/<bucket>/o/<file>?alt=json.
