Google Cloud Storage get signedUrl from CDN npm - node.js

I am using code like the following to create a signed URL for my content:
var storage = require('@google-cloud/storage')();
var myBucket = storage.bucket('my-bucket');
var file = myBucket.file('my-file');
//-
// Generate a URL that allows temporary access to download your file.
//-
var request = require('request');
var config = {
  action: 'read',
  expires: '03-17-2025'
};
file.getSignedUrl(config, function(err, url) {
  if (err) {
    console.error(err);
    return;
  }
  // The file is now available to read from the URL.
});
This creates a URL that starts with https://storage.googleapis.com/my-bucket/
If I place that URL in the browser, it is readable.
However, I guess that URL is direct access to the bucket file and does not pass through my configured CDN.
I see in the docs (https://cloud.google.com/nodejs/docs/reference/storage/1.6.x/File#getSignedUrl) that you can pass a cname option, which transforms the URL by replacing https://storage.googleapis.com/my-bucket/ with my bucket's CDN hostname.
However, when I copy the resulting URL, the service account or resulting URL doesn't seem to have access to the resource.
I have added the Firebase admin service account to the bucket, but I still get no access.
Also, from the docs, a CDN signed URL looks quite different from one signed through that API. Is it possible to create a CDN signed URL from the API, or should I create it manually as explained in https://cloud.google.com/cdn/docs/using-signed-urls?hl=en_US&_ga=2.131493069.-352689337.1519430995#configuring_google_cloud_storage_permissions?

For anyone interested in the node code for that signing:
var url = 'URL of the endpoint served by Cloud CDN';
var key_name = 'Name of the signing key added to the Google Cloud Storage bucket or service';
var key = 'Signing key as URL-safe base64 encoded string';
var expiration = Math.round(new Date().getTime() / 1000) + 600; // ten minutes from now, in seconds

var crypto = require("crypto");
var URLSafeBase64 = require('urlsafe-base64');

// Decode the URL-safe base64 encoded key
var decoded_key = URLSafeBase64.decode(key);

// Build the URL to sign
var urlToSign = url
  + (url.indexOf('?') > -1 ? "&" : "?")
  + "Expires=" + expiration
  + "&KeyName=" + key_name;

// Sign the URL using the key and URL-safe base64 encode the signature
var hmac = crypto.createHmac('sha1', decoded_key);
var signature = hmac.update(urlToSign).digest();
var encoded_signature = URLSafeBase64.encode(signature);

// Concatenate the URL and encoded signature
urlToSign += "&Signature=" + encoded_signature;

The Cloud CDN content delivery network works with HTTP(S) load balancing to deliver content to your users. Are you using an HTTP(S) load balancer to deliver content to your users?
You can see this attached document[1] on using Google Cloud CDN and HTTP(S) load balancing and inserting content into the cache.
[1] https://cloud.google.com/cdn/docs/overview
[2] https://cloud.google.com/cdn/docs/concepts
What error code are you getting? Can you run the curl command and send the output with the error code for further analysis?
Could you confirm that the configuration you have done meets the cacheability requirements, as not all HTTP responses are cacheable? Google Cloud CDN caches only those responses that satisfy specific conditions [3]; please confirm. Upon confirmation, I will investigate further and advise you accordingly.
[3] Cacheability: https://cloud.google.com/cdn/docs/caching#cacheability
Could you provide the output of the two commands below, which will help me verify whether there is a permission issue on these objects? These commands will dump all the current permission settings on the object.
gsutil acl get gs://[full_path_to_file_to_be_cached]
gsutil ls -L gs://[full_path_to_file_to_be_cached]
For more details on permissions, refer to this GCP documentation [4]:
[4] Setting bucket permissions: https://cloud.google.com/storage/docs/cloud-console#_bucketpermission
No, it is not possible to create a CDN signed URL from the API.

From what Google documents, the answer provided by @htafoya seems legit.
However, I spent a couple of hours struggling over why the signed URL was not working: the CDN endpoint kept complaining about access being denied. Eventually I found that the code using the crypto module doesn't produce the same HMAC-SHA1 hash value as what gcloud compute sign-url computes; I still don't know why.
At the same time, I found this lib (jsSHA) to be pretty cool: it generates exactly the same HMAC-SHA1 hash value as gcloud and it has a neat API, so I am commenting here so that others with the same struggle can benefit. This is the final code I used to sign a gcloud CDN URL:
import jsSHA from 'jssha';

// url, daySeconds, keyName and signKey (the base64-encoded signing key) are assumed to be defined by the caller
const url = `https://{domain}/{path}`;
const expire = Math.round(new Date().getTime() / 1000) + daySeconds;
const extendedUrl = `${url}${url.indexOf('?') > -1 ? "&" : "?"}Expires=${expire}&KeyName=${keyName}`;

// Convert the standard base64 signature to the URL-safe variant Cloud CDN expects
const safeSign = (b64) => b64.replace(/\+/g, '-').replace(/\//g, '_');

// use jssha
const shaObj = new jsSHA("SHA-1", "TEXT", { hmacKey: { value: signKey, format: "B64" } });
shaObj.update(extendedUrl);
const signature = safeSign(shaObj.getHMAC('B64'));
return `${extendedUrl}&Signature=${signature}`;
working great!

Related

Azure blob SAS URL 403 in browser img tag but downloads fine

I have an Azure storage account where I store blobs in containers.
I am generating a SAS URL in order to show the images in my React web app.
When pasting the URL into the browser everything works fine and the image is downloaded,
but when I try to display it as an img tag in the browser I am receiving the following issue:
Failed to load resource: the server responded with a status of 403 (Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.)
It started happening a day ago; before that it worked fine.
A sample form of URL that I generate is:
https://{{storageName}}.blob.core.windows.net/{{ContainerName}}/5261a483-e131-40f9-90b2-91657b1daec7.png?sv=2020-10-02&st=2022-01-01T10%3A58%3A20Z&se=2022-01-01T11%3A01%3A27Z&sr=b&sp=r&sig=vN2k3%2BD04BDwnSIDx%2F%2FDyGfUt1UIIoivfOzdfh0kWG0%3D
And the code I am using to generate it is:
try {
  // get extension of promo image
  var containerName = container;
  const client = blobServiceClient.getContainerClient(containerName)
  if (!containerName)
    return '';
  // get extension of promo image
  const blobName = imageName;
  const blobClient = client.getBlobClient(blobName);
  const blobSAS = generateBlobSASQueryParameters({
      containerName,
      blobName,
      permissions: BlobSASPermissions.parse("r"),
      startsOn: new Date(),
      expiresOn: new Date(new Date().valueOf() + 186400)
    },
    cerds
  ).toString();
  // await sleep(0);
  const sasUrl = blobClient.url + "?" + blobSAS;
  // console.log(sasUrl);
  return sasUrl;
}
catch (err) {
  console.log(err)
  return '';
}
Why is this happening? How can it be that from the browser URL I always get a good response and can download the image, but from the img tag I am getting a 403?
As a workaround, try these solutions:
Solution 1) Check if your storage account is firewall enabled.
Azure Portal -> Storage Account -> Networking -> Check Allow Access From (All Networks / Selected Networks)
If it is "Selected Networks", it means the storage account is firewall enabled.
If the storage account is firewall enabled, check that your app is whitelisted to access it.
For more details, refer to this document: https://learn.microsoft.com/en-us/answers/questions/334786/azure-blob-storage-fails-to-authenticate-34make-su.html
Solution 2) Check whether the SAS (shared access signature) has expired, and try generating a new one.
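For Solution 2, here is a minimal sketch of regenerating the token with a longer validity window, assuming the same @azure/storage-blob helpers and the cerds credential from the question; note that the expiry offsets are in milliseconds:
const { BlobSASPermissions, generateBlobSASQueryParameters } = require('@azure/storage-blob');

// Regenerate the SAS with a small clock-skew buffer and a 24-hour expiry (offsets in milliseconds)
const blobSAS = generateBlobSASQueryParameters({
  containerName,
  blobName,
  permissions: BlobSASPermissions.parse("r"),
  startsOn: new Date(Date.now() - 5 * 60 * 1000),        // 5 minutes in the past
  expiresOn: new Date(Date.now() + 24 * 60 * 60 * 1000)  // 24 hours from now
}, cerds).toString();

const sasUrl = `${blobClient.url}?${blobSAS}`;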

Node.js reading a blob with azure and creating a SAS token

So I am currently writing some code that gets a container, then selects a blob and makes a SAS token. This all currently works, but I get an error when I try to open the link.
The error being given is this:
AuthenticationFailed
Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:somethingsomething
The specified signed resource is not allowed for the this resource level
// Imports from @azure/storage-blob (not shown in the original snippet)
const { StorageSharedKeyCredential, BlobSASPermissions, generateBlobSASQueryParameters } = require('@azure/storage-blob');

const test = () => {
  const keyCredit = new StorageSharedKeyCredential('storageaccount', 'key')
  const sasOptions = {
    containerName: 'compliance',
    blobName: 'swo_compliance.csv',
  };
  sasOptions.expiresOn = new Date(new Date().valueOf() + 3600 * 1000);
  sasOptions.permissions = BlobSASPermissions.parse("r");
  const sasToken = generateBlobSASQueryParameters(sasOptions, keyCredit).toString();
  console.log(`SAS token for blob container is: url/?${sasToken}`);
  return `url/?${sasToken}`;
}
I tried to reproduce the scenario in my system and was able to download the blob using the SAS token.
While returning the URL in your code, remove the "/": instead of return url/?${sasToken}; just use return url?${sasToken};
Example URL: https://StorageName.blob.core.windows.net/test/test.txt?SASToken
I tried it in my system and was able to download the blob.
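For completeness, here is a minimal sketch of the corrected flow, assuming @azure/storage-blob and the same account, container, and blob names as in the question; the blob client's URL replaces the url placeholder, so no extra "/" is needed:
const { BlobServiceClient, StorageSharedKeyCredential, BlobSASPermissions, generateBlobSASQueryParameters } = require('@azure/storage-blob');

const credential = new StorageSharedKeyCredential('storageaccount', 'key');
const serviceClient = new BlobServiceClient('https://storageaccount.blob.core.windows.net', credential);
const blobClient = serviceClient.getContainerClient('compliance').getBlobClient('swo_compliance.csv');

const sasToken = generateBlobSASQueryParameters({
  containerName: 'compliance',
  blobName: 'swo_compliance.csv',
  permissions: BlobSASPermissions.parse("r"),
  expiresOn: new Date(Date.now() + 3600 * 1000)
}, credential).toString();

// blobClient.url is already the full URL of the blob, so only "?" plus the token is appended
const signedUrl = `${blobClient.url}?${sasToken}`;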

AccessDenied when generate signed url for Amazon s3 using aws-cloudfront-sign and node.js

I created a bucket on S3 and added an HTML file. After this I created a CloudFront key pair using my root user and added a CloudFront distribution for that bucket. I tried to access the object using that distribution and it worked; then I restricted access to the bucket using a behaviour and selected "self".
Finally, I generated a signed URL from Node.js and tested it using Postman.
The problem is that I always get AccessDenied.
<Error>
<Code>AccessDenied</Code>
<Message>Access denied</Message>
</Error>
Here is my code.
const cfsign = require("aws-cloudfront-sign");

var signingParams = {
  keypairId: process.env.PUBLIC_KEY,
  privateKeyPath: "./aws/Y3PA.pem",
  expireTime: (new Date().getTime() + 999999999)
};

// Generating a signed URL
signedUrl = () => {
  console.log("url created " + process.env.PUBLIC_KEY);
  return cfsign.getSignedUrl(
    "xxxx.cloudfront.net/test.html",
    signingParams
  );
}
The scheme is part of the URL that is required as input to the signature algorithm, so your error is likely to be here:
cfsign.getSignedUrl("xxxx.cloudfront.net/...
Instead of that, you need this:
cfsign.getSignedUrl("https://xxxx.cloudfront.net/...

Properly enabling security for filepicker.io in Meteor

Filepicker by default allows pretty much anybody who is clever enough to copy your API key out of the client code to add files to your S3 bucket, but luckily it also offers a security option with expiring policies.
However, I have no idea how to implement this in Meteor.js. I have tried back and forth: installing the meteor-crypto-base package, trying to generate the hashes on the server, and trying RGBboy's urlsafe-base64 algorithm from https://github.com/RGBboy/urlsafe-base64. But I just do not get any further. Maybe someone can help! Thank you in advance.
This is an example of how to do Filepicker signed URLs in Meteor, based on the documentation here:
var crypto = Npm.require('crypto');

var FILEPICKER_KEY = 'Z3IYZSH2UJA7VN3QYFVSVCF7PI';
var BASE_URL = 'https://www.filepicker.io/api/file';

Meteor.methods({
  signedUrl: function(handle) {
    var expiry = Math.floor(new Date().getTime() / 1000 + 60 * 60);
    var policy = new Buffer(JSON.stringify({
      handle: handle,
      expiry: expiry
    })).toString('base64');
    var signature = crypto
      .createHmac('sha256', FILEPICKER_KEY)
      .update(policy)
      .digest('hex');
    return BASE_URL + "/" + handle +
      "?signature=" + signature + "&policy=" + policy;
  }
});
Note that this will need to exist somewhere inside your server directory so you don't ship the key to the client. To demonstrate that it works, on the client side you can call it like so:
Meteor.call('signedUrl', 'KW9EJhYtS6y48Whm2S6D', function(err, url){console.log(url)});
If everything worked, you should see a photo when you visit the returned URL.

Using JavaScript to put file into blob storage results in 403 (Server failed to authenticate request)

I followed this great article by Gaurav Mantri in order to upload files using HTML5/JavaScript directly into blob storage.
http://gauravmantri.com/2013/02/16/uploading-large-files-in-windows-azure-blob-storage-using-shared-access-signature-html-and-javascript/
However, I am finding that during the upload this portion of his code fails with the 403 error.
The funny thing is, this happens randomly. Sometimes the upload actually works and everything completes successfully; however, the majority of the time it fails with the 403 error.
One thing to note: I am hoping that CORS support will be added to Azure soon; for the time being I am using Chrome (with the chrome.exe --disable-web-security option) to get around the issue.
PUT
https://mystorage.blob.core.windows.net/asset-38569007-3316-4350…Giv17ye4bocVWDbA/EQ+riNiG3wEGrFucbd1BKI9E=&comp=block&blockid=YmxvY2stMA==
403 (Server failed to authenticate the request. Make sure the value of
Authorization header is formed correctly including the signature.)
$.ajax({
  url: uri,
  type: "PUT",
  data: requestData,
  processData: false,
  beforeSend: function(xhr) {
    xhr.setRequestHeader('x-ms-blob-type', 'BlockBlob');
    xhr.setRequestHeader('Content-Length', requestData.length);
  },
  success: function (data, status) {
    console.log(data);
    console.log(status);
    bytesUploaded += requestData.length;
    var percentComplete = ((parseFloat(bytesUploaded) / parseFloat(selectedFile.size)) * 100).toFixed(2);
    $("#fileUploadProgress").text(percentComplete + " %");
    uploadFileInBlocks();
  },
  error: function(xhr, desc, err) {
    console.log(desc);
    console.log(err);
  }
});
I have put a 30-second delay after creating the asset/locator/file in Azure before actually starting the upload, in order to give the locator time to propagate.
Any suggestions as to what I could be missing?
Many thanks to Gaurav for pointing me in the direction of the issue.
It turns out that I was making JSON calls to the server, which would create the assets/locators/policies and then return the upload URI.
However, my upload URI was of type Uri, and when JSON serialized it, it wasn't properly encoded.
After changing my URI object (on the server) to a string (and calling uploaduri = (new UriBuilder(theuri)).ToString(); ), the URI returned to the web client was properly encoded and I no longer got the 403 errors.
So, as a heads up to others: if you get this same issue, you may want to look at the encoding of your upload URI.
Gaurav, here's the code I use to create the empty asset (with locator and file):
/// <summary>
/// Creates an empty asset on Azure and prepares it to upload
/// </summary>
public FileModel Create(FileModel file)
{
    // Update the file model with file and asset id
    file.FileId = Guid.NewGuid().ToString();

    // Create the new asset
    var createdAsset = this.Context.Assets.Create(file.AssetName.ToString(), AssetCreationOptions.None);

    // Create the file inside the asset and set its size
    var createdFile = createdAsset.AssetFiles.Create(file.Filename);
    createdFile.ContentFileSize = file.Size;

    // Create a policy to allow uploading to this asset
    var writePolicy = this.Context.AccessPolicies.Create("Policy For Copying", TimeSpan.FromDays(365 * 10), AccessPermissions.Read | AccessPermissions.Write | AccessPermissions.List);

    // Get the upload locator
    var destinationLocator = this.Context.Locators.CreateSasLocator(createdAsset, writePolicy);

    // Get the SAS Uri and save it to file
    var uri = new UriBuilder(new Uri(destinationLocator.Path));
    uri.Path += "/" + file.Filename;
    file.UploadUri = uri.Uri;

    // Return the updated file
    return file;
}
