Node.js reading a blob with Azure and creating a SAS token - node.js

So I am currently writing some code that gets a container, selects a blob, and creates a SAS token. This all currently works, but I get an error when I try to open the link.
The error being given is this:
AuthenticationFailed
Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:somethingsomething
The specified signed resource is not allowed for the this resource level
const { StorageSharedKeyCredential, BlobSASPermissions, generateBlobSASQueryParameters } = require('@azure/storage-blob');

const test = () => {
  const keyCredit = new StorageSharedKeyCredential('storageaccount', 'key');
  const sasOptions = {
    containerName: 'compliance',
    blobName: 'swo_compliance.csv',
  };
  sasOptions.expiresOn = new Date(new Date().valueOf() + 3600 * 1000);
  sasOptions.permissions = BlobSASPermissions.parse("r");
  const sasToken = generateBlobSASQueryParameters(sasOptions, keyCredit).toString();
  console.log(`SAS token for blob container is: url/?${sasToken}`);
  return `url/?${sasToken}`;
}

I tried to reproduce the scenario in my system and was able to download the blob using the SAS token.
Where you return url/?${sasToken} in your code, remove the "/" and just return url?${sasToken}.
Example URL: https://StorageName.blob.core.windows.net/test/test.txt?SASToken
With that change I was able to download the blob in my system.
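For reference, a minimal sketch of the corrected construction, assuming the same placeholder account, container, and blob names as the question, and taking the blob URL from the blob client instead of hand-assembling it:

const { BlobServiceClient, StorageSharedKeyCredential, generateBlobSASQueryParameters, BlobSASPermissions } = require('@azure/storage-blob');

// Placeholder account name and key, as in the question.
const account = 'storageaccount';
const credential = new StorageSharedKeyCredential(account, 'key');

const sasToken = generateBlobSASQueryParameters({
  containerName: 'compliance',
  blobName: 'swo_compliance.csv',
  permissions: BlobSASPermissions.parse('r'),
  expiresOn: new Date(Date.now() + 3600 * 1000),
}, credential).toString();

// Take the blob URL from the client and append the token with "?" only (no extra "/").
const serviceClient = new BlobServiceClient(`https://${account}.blob.core.windows.net`, credential);
const blobClient = serviceClient.getContainerClient('compliance').getBlobClient('swo_compliance.csv');
const sasUrl = `${blobClient.url}?${sasToken}`;
console.log(sasUrl);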

Related

Azure blob SAS URL 403 in browser img tag but downloads fine

I have an Azure storage account where I store blobs in containers.
I am generating SAS URLs in order to show the images in my React web app.
When pasting the URL into the browser everything works fine and the image is downloaded,
but when I try to display it as an img tag in the browser I receive the following issue:
Failed to load resource: the server responded with a status of 403 (Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.)
It started happening a day ago; before that it worked fine.
A sample form of URL that I generate is:
https://{{storageName}}.blob.core.windows.net/{{ContainerName}}/5261a483-e131-40f9-90b2-91657b1daec7.png?sv=2020-10-02&st=2022-01-01T10%3A58%3A20Z&se=2022-01-01T11%3A01%3A27Z&sr=b&sp=r&sig=vN2k3%2BD04BDwnSIDx%2F%2FDyGfUt1UIIoivfOzdfh0kWG0%3D
And the code I am using to generate it is:
try {
  // get extension of promo image
  var containerName = container;
  const client = blobServiceClient.getContainerClient(containerName);
  if (!containerName)
    return '';
  const blobName = imageName;
  const blobClient = client.getBlobClient(blobName);
  const blobSAS = generateBlobSASQueryParameters({
      containerName,
      blobName,
      permissions: BlobSASPermissions.parse("r"),
      startsOn: new Date(),
      expiresOn: new Date(new Date().valueOf() + 186400)
    },
    cerds
  ).toString();
  // await sleep(0);
  const sasUrl = blobClient.url + "?" + blobSAS;
  // console.log(sasUrl);
  return sasUrl;
}
catch (err) {
  console.log(err);
  return '';
}
Why is this happening? How can it be that from the browser URL I always get a good response and can download the image, but from the img tag I get a 403?
As a workaround, try these solutions:
Solution 1) Check whether your storage account is firewall enabled.
Azure Portal -> Storage Account -> Networking -> check Allow Access From (All Networks / Selected Networks)
If it is "Selected Networks", the storage account is firewall enabled.
If the storage account is firewall enabled, check that your app is whitelisted to access it.
For more details refer to this document: https://learn.microsoft.com/en-us/answers/questions/334786/azure-blob-storage-fails-to-authenticate-34make-su.html
Solution 2) Check whether the SAS (shared access signature) has expired, and try generating a new one, as in the sketch below.
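On Solution 2, it may be worth noting that the Date arithmetic in the question is in milliseconds, so expiresOn: new Date(new Date().valueOf() + 186400) produces a token that expires after roughly three minutes. A minimal sketch of a longer-lived token, reusing the question's containerName, blobName, and cerds credential:

const { generateBlobSASQueryParameters, BlobSASPermissions } = require('@azure/storage-blob');

// Sketch only: containerName, blobName, and cerds come from the question's code.
const blobSAS = generateBlobSASQueryParameters({
  containerName,
  blobName,
  permissions: BlobSASPermissions.parse("r"),
  startsOn: new Date(),
  // 24 hours expressed in milliseconds, rather than 186400 ms (about 3 minutes)
  expiresOn: new Date(Date.now() + 24 * 60 * 60 * 1000)
}, cerds).toString();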

How to read an Azure Blob URL?

I am creating a list of blog posts in React and Express with an Azure SQL DB. I am able to use Azure Blob Storage to store the image associated with a post. I am also able to get the blob URL, and I am storing that in my SQL DB. However, when I try to read the URL directly it throws a "resource not found" error. After searching the docs and other Stack Overflow answers I could infer that it has something to do with a SAS token. Can anyone explain what would be a better way to approach this?
https://yourdomain.blob.core.windows.net/imagecontainer/yourimage.png
Below is the Node.js code.
router.post('/image', async function (req, res) {
  try {
    console.log(req.files.files.data);
    const blobName = 'test' + uuidv1() + '.png';
    const containerClient = blobServiceClient.getContainerClient(containerName);
    const blockBlobClient = containerClient.getBlockBlobClient(blobName);
    const uploadBlobResponse = await blockBlobClient.upload(req.files.files.data, req.files.files.data.length);
    res.send({ tempUrl: blockBlobClient.url });
  } catch (e) {
    console.log(e);
  }
})
However, when I want to read the URL directly it throws an error:
resource not found.
Most likely you're getting this error because the blob container containing the blob has a Private ACL, and because of that anonymous access is disabled. To enable anonymous access, change the blob container's ACL to Blob or Container and that will solve the problem.
If you can't (or don't want to) change the blob container's ACL, the other option is to create a Shared Access Signature (SAS) on the blob. A SAS essentially gives time- and permission-bound access to a blob. For your needs, you would create a short-lived SAS token with just Read permission.
To generate a SAS token, use the generateBlobSASQueryParameters method. Once you create a SAS token, append it to your blob's URL to get a SAS URL.
Here's the sample code to do so. It makes use of the @azure/storage-blob node package.
const { BlobSASPermissions, StorageSharedKeyCredential, generateBlobSASQueryParameters } = require('@azure/storage-blob');

const permissions = new BlobSASPermissions();
permissions.read = true; // Set read permission only.

const currentDateTime = new Date();
const expiryDateTime = new Date(currentDateTime.setMinutes(currentDateTime.getMinutes() + 5)); // Expire the SAS token in 5 minutes.

var blobSasModel = {
  containerName: 'your-blob-container-name',
  blobName: 'your-blob-name',
  permissions: permissions,
  expiresOn: expiryDateTime
};

const sharedKeyCredential = new StorageSharedKeyCredential('your-storage-account-name', 'your-storage-account-key');
const sasToken = generateBlobSASQueryParameters(blobSasModel, sharedKeyCredential);
// Append the token to the blob's URL (blockBlobClient from the upload code above) and return this SAS URL to the client.
const sasUrl = blockBlobClient.url + "?" + sasToken;

Google Cloud Storage get signedUrl from CDN npm

I am using code like the following to create a signed URL for my content:
var storage = require('@google-cloud/storage')();
var myBucket = storage.bucket('my-bucket');
var file = myBucket.file('my-file');

//-
// Generate a URL that allows temporary access to download your file.
//-
var request = require('request');

var config = {
  action: 'read',
  expires: '03-17-2025'
};

file.getSignedUrl(config, function(err, url) {
  if (err) {
    console.error(err);
    return;
  }
  // The file is now available to read from the URL.
});
This creates a URL that starts with https://storage.googleapis.com/my-bucket/.
If I place that URL in the browser, it is readable.
However, I guess that URL is direct access to the bucket file and is not passing through my configured CDN.
I see in the docs (https://cloud.google.com/nodejs/docs/reference/storage/1.6.x/File#getSignedUrl) that you can pass a cname option, which transforms the URL, replacing https://storage.googleapis.com/my-bucket/ with my bucket's CDN domain.
HOWEVER, when I copy the resulting URL, the service account or resulting URL doesn't seem to have access to the resource.
I have added the Firebase admin service account to the bucket but still get no access.
Also, from the docs, the CDN signed URL seems a lot different from the one signed through that API. Is it possible to create a CDN signed URL from the API, or should I manually create it as explained in: https://cloud.google.com/cdn/docs/using-signed-urls?hl=en_US&_ga=2.131493069.-352689337.1519430995#configuring_google_cloud_storage_permissions?
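For clarity, the cname option referred to above goes in the getSignedUrl config; a minimal sketch, with https://cdn.example.com standing in as a placeholder for the CDN domain:

var config = {
  action: 'read',
  expires: '03-17-2025',
  cname: 'https://cdn.example.com' // placeholder: the CDN hostname mapped to the bucket
};
file.getSignedUrl(config, function(err, url) {
  if (err) {
    console.error(err);
    return;
  }
  // url now begins with https://cdn.example.com/ instead of https://storage.googleapis.com/my-bucket/
});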
For anyone interested in the node code for that signing:
var url = 'URL of the endpoint served by Cloud CDN';
var key_name = 'Name of the signing key added to the Google Cloud Storage bucket or service';
var key = 'Signing key as urlsafe base64 encoded string';
var expiration = Math.round(new Date().getTime() / 1000) + 600; // ten minutes later, in seconds

var crypto = require("crypto");
var URLSafeBase64 = require('urlsafe-base64');

// Decode the URL safe base64 encoded key
var decoded_key = URLSafeBase64.decode(key);

// Build the URL to sign
var urlToSign = url
  + (url.indexOf('?') > -1 ? "&" : "?")
  + "Expires=" + expiration
  + "&KeyName=" + key_name;

// Sign the URL using the key and URL safe base64 encode the signature
var hmac = crypto.createHmac('sha1', decoded_key);
var signature = hmac.update(urlToSign).digest();
var encoded_signature = URLSafeBase64.encode(signature);

// Concatenate the URL and encoded signature
urlToSign += "&Signature=" + encoded_signature;
The Cloud CDN content delivery network works with HTTP(S) load balancing to deliver content to your users. Are you using HTTPS Load Balancer to deliver content to your users?
You can see this attached document[1] on using Google Cloud CDN and HTTP(S) load balancing and inserting content into the cache.
[1] https://cloud.google.com/cdn/docs/overview
[2] https://cloud.google.com/cdn/docs/concepts
What error code are you getting? Can you use the curl command and send the output with the error code for further analysis?
Could you confirm that your configuration meets the requirements for cacheability, as not all HTTP responses are cacheable? Google Cloud CDN caches only those responses that satisfy specific conditions [3]; please confirm. Upon confirmation, I will investigate further and advise you accordingly.
[3] Cacheability: https://cloud.google.com/cdn/docs/caching#cacheability
Could you provide the output of the two commands below, which will help me verify whether there is a permission issue on these objects? These commands dump all the current permission settings on the object.
gsutil acl get gs://[full_path_to_file_to_be_cached]
gsutil ls -L gs://[full_path_to_file_to_be_cached]
For more details on permission, refer to this GCP documentation [4]
[4] Setting bucket permissions: https://cloud.google.com/storage/docs/cloud-console#_bucketpermission
No, it is not possible to create a CDN signed URL from the API.
From what Google documents here, the answer provided by @htafoya seems legit.
However, I spent a couple of hours struggling with why the signed URL was not working, with the CDN endpoint complaining about access denied. Eventually I found that the code using the crypto module doesn't produce the same HMAC-SHA1 hash value as what gcloud compute sign-url computes; I still don't know why.
At the same time, I found this lib (jsSHA) is pretty cool: it generates the HMAC-SHA1 hash value exactly the same as gcloud, and it has a neat API, so I thought I should comment here so that others with the same struggle can benefit. This is the final code I used to sign a gcloud CDN URL:
import jsSHA from 'jssha';

// daySeconds, keyName, and signKey are defined elsewhere (not shown).
const url = `https://{domain}/{path}`;
const expire = Math.round(new Date().getTime() / 1000) + daySeconds;
const extendedUrl = `${url}${url.indexOf('?') > -1 ? "&" : "?"}Expires=${expire}&KeyName=${keyName}`;

// Use jsSHA to compute the HMAC-SHA1 signature with the base64-encoded signing key.
const shaObj = new jsSHA("SHA-1", "TEXT", { hmacKey: { value: signKey, format: "B64" } });
shaObj.update(extendedUrl);
// safeSign is the author's helper (not shown) that URL-safe-encodes the base64 signature.
const signature = safeSign(shaObj.getHMAC('B64'));

return `${extendedUrl}&Signature=${signature}`;
working great!

Create Azure CDN endpoint for Azure container

I need to create an Azure CDN endpoint for an Azure container. I am using the code below to do so.
Endpoint endpoint = new Endpoint()
{
    IsHttpAllowed = true,
    IsHttpsAllowed = true,
    Location = this.config.ResourceLocation,
    Origins = new List<DeepCreatedOrigin> { new DeepCreatedOrigin(containerName, string.Format(STORAGE_URL, storageAccountName)) },
    OriginPath = "/" + containerName,
};

await this.cdnManagementClient.Endpoints.CreateAsync(this.config.ResourceGroupName, storageAccountName, containerName, endpoint);
All the information I provide is correct and the endpoint gets created successfully, but when I try to access any blob inside it, I get an InvalidUrl error.
The weird thing is, if I create the same endpoint with the same values through the portal, I am able to access and download blobs.
Can anyone let me know what I am doing wrong in my code? Do I need to pass any extra parameters?
As far as I know, if you want to create a storage CDN endpoint in code, you need to set the OriginHostHeader value to your storage account URL.
For more details, you could refer to the code below:
// Create CDN client
CdnManagementClient cdn = new CdnManagementClient(new TokenCredentials(token))
    { SubscriptionId = subscriptionId };

//ListProfilesAndEndpoints(cdn);

Endpoint e1 = new Endpoint()
{
    // OptimizationType = "storage",
    Origins = new List<DeepCreatedOrigin>() { new DeepCreatedOrigin("{yourstoragename}-blob-core-windows-net", "{yourstoragename}.blob.core.windows.net") },
    OriginHostHeader = "{yourstoragename}.blob.core.windows.net",
    IsHttpAllowed = true,
    IsHttpsAllowed = true,
    OriginPath = @"/foo2",
    Location = "EastAsia"
};

cdn.Endpoints.Create(resourcegroup, profilename, enpointname, e1);
Besides, I suggest you generate a SAS token to access the blob file directly by URL.

How do I find whether a blob URL's shared access token has expired?

I have written the code below to get the blob URL with an expiry token; I have set the blob URL to expire in 2 hours.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference(containerName);
CloudBlockBlob blockBlob = container.GetBlockBlobReference("blobname");

//Create an ad-hoc Shared Access Policy with read permissions which will expire in 2 hours
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2),
};
SharedAccessBlobHeaders headers = new SharedAccessBlobHeaders()
{
    ContentDisposition = string.Format("attachment;filename=\"{0}\"", "blobname"),
};

var sasToken = blockBlob.GetSharedAccessSignature(policy, headers);
blobUrl = blockBlob.Uri.AbsoluteUri + sasToken;
Using the above code I get the blob URL with a valid expiry token; now I want to check in a client application whether the blob URL is still valid.
I tried the WebRequest and HttpClient approach, passing the URL and getting the response status code: if the response code is 404 I assume the URL has expired, otherwise the URL is still valid. But this approach takes too much time.
Please suggest any other way.
I tried running code very similar to yours, and I am getting a 403 error, which is actually what is expected in this case. Based on your question, I am not sure whether the 403 is more helpful to you than the 404. Here is code running in a console application that returns a 403:
class Program
{
    static void Main(string[] args)
    {
        string blobUrl = CreateSAS();
        CheckSAS(blobUrl);
        Console.ReadLine();
    }

    //This method returns a reference to the blob with the SAS, and attempts to read it.
    static void CheckSAS(string blobUrl)
    {
        CloudBlockBlob blob = new CloudBlockBlob(new Uri(blobUrl));

        //If the DownloadText() method is run within the two minute period that the SAS is valid, it succeeds.
        //If it is run after the SAS has expired, it returns a 403 error.
        //Sleep for 3 minutes to trigger the error.
        System.Threading.Thread.Sleep(180000);
        Console.WriteLine(blob.DownloadText());
    }

    //This method creates the SAS on the blob.
    static string CreateSAS()
    {
        string containerName = "forum-test";
        string blobName = "blobname";
        string blobUrl = "";

        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference(containerName);
        container.CreateIfNotExists();

        CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName + DateTime.Now.Ticks);
        blockBlob.UploadText("Blob for forum test");

        //Create an ad-hoc Shared Access Policy with read permissions which will expire in 2 minutes
        SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy()
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(2),
        };
        SharedAccessBlobHeaders headers = new SharedAccessBlobHeaders()
        {
            ContentDisposition = string.Format("attachment;filename=\"{0}\"", blobName),
        };

        var sasToken = blockBlob.GetSharedAccessSignature(policy, headers);
        blobUrl = blockBlob.Uri.AbsoluteUri + sasToken;
        return blobUrl;
    }
}
There are cases in which SAS failures do return a 404, which can create problems for troubleshooting operations using SAS. The Azure Storage team is aware of this issue and in future releases SAS failures may return a 403 instead. For help troubleshooting a 404 error, see http://azure.microsoft.com/en-us/documentation/articles/storage-monitoring-diagnosing-troubleshooting/#SAS-authorization-issue.
I also ran into the same issue a few days back. I was actually expecting the storage service to return a 403 error code when the SAS token has expired, but the storage service returns a 404 error.
Given that we don't have any other option, the way you're doing it is the only viable way, but it is still not entirely reliable, because you could also get a 404 error if the blob is not present in the storage account.
Maybe you can parse the "se" argument from the generated SAS, which is the expiry time, e.g. "se=2013-04-30T02%3A23%3A26Z". However, since the server time might not be the same as the client time, the solution may be unstable.
http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/
You're using UTC time for SharedAccessExpiryTime (see "Expiry Time" in https://learn.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1#parameters-common-to-account-sas-and-service-sas-tokens).
The expiry time then is registered under the se claim in the token the value of which can be checked against current UTC time on the client side before actually using the token. This way you'd save yourself from making an extra call to the Blob storage only to find out whether the token is expired.
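To illustrate the client-side check the last two answers describe, here is a minimal sketch (in Node.js rather than C#) that reads the se parameter from a SAS URL and compares it with the current UTC time; the function name is just for illustration:

// Sketch: decide whether a SAS URL looks expired by reading its "se" (expiry) parameter.
function sasLooksExpired(sasUrl) {
  const expiry = new URL(sasUrl).searchParams.get('se'); // e.g. "2013-04-30T02:23:26Z"
  if (!expiry) return true;                              // no expiry claim: treat as unusable
  // Both values are UTC; as noted above, allow for clock skew between client and service.
  return new Date(expiry).getTime() <= Date.now();
}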
