I am generating a shared access signature (SAS) for one of my blob containers (version 2) using the Azure portal. I am trying to upload a file from the frontend, for which a SAS is required. The problem is that the SAS expires every day. Is there a way to renew the SAS automatically in code, or a way to do the authentication using Azure AD?
So basically I have a front end where the user logs in using Azure AD, and I want to use that session to let them upload to Azure Storage. Since the user is already authorized, I feel there should be a way to generate a SAS on the fly for their session.
Shared access signatures are useful for providing limited permissions to your storage account to clients that should not have the account key.
If you are the one writing data to the storage account, do so server-side. That way you can validate that the user is logged in and, if so, let your backend write to the storage account using one of the access keys (or, better yet, a managed identity).
Of course, you could have your front-end request a SAS token from the back-end, for instance from an API. This could be implemented simply, for instance using an Azure Function, and the SAS token could use near-term expiration times. In the end, though, you're still opening up part of the storage account to anyone who can access the frontend.
With near-term expiration, even if a SAS is compromised, it's valid only for a short time. This practice is especially important if you cannot reference a stored access policy. Near-term expiration times also limit the amount of data that can be written to a blob by limiting the time available to upload to it.
Source: Using shared access signatures (SAS)
Taken from that same article:
The following code example creates an account SAS that is valid for the Blob and File services, and gives the client read, write, and list permissions to access service-level APIs. The account SAS restricts the protocol to HTTPS, so requests must be made over HTTPS.
static string GetAccountSASToken()
{
    // To create the account SAS, you need to use your shared key credentials. Modify for your account.
    const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key";
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);

    // Create a new access policy for the account.
    SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
    {
        Permissions = SharedAccessAccountPermissions.Read | SharedAccessAccountPermissions.Write | SharedAccessAccountPermissions.List,
        Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,
        ResourceTypes = SharedAccessAccountResourceTypes.Service,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
        Protocols = SharedAccessProtocol.HttpsOnly
    };

    // Return the SAS token.
    return storageAccount.GetSharedAccessSignature(policy);
}
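To tie this to the Azure Function suggestion above, here is a minimal sketch of an HTTP-triggered function that hands the token to the front end. It assumes the in-process functions model and that GetAccountSASToken() from above is accessible; the function name and authorization level are placeholders, not anything from the article.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class SasTokenFunction
{
    [FunctionName("GetSasToken")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        // In a real app, validate the caller's Azure AD session here
        // before handing out a token.
        return new OkObjectResult(GetAccountSASToken());
    }
}

The front end then appends the returned token to the blob URI (or builds StorageCredentials from it) when uploading.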
Related
We are using Azure blob storage for storing unstructured content.
Our setup is as follows:
Browser client (accessing files) -> Our backend (our cloud platform) -> Proxy managing the Azure account (our cloud platform) -> Azure Blob Storage.
The proxy managing the Azure account holds the account credentials. It generates a SAS token and hands it to consumers such as our backend. This SAS token has an infinite expiry time.
Now, from our backend, we want to generate a pre-signed URL (similar to the S3 concept) with an expiration time and give it to the browser client. We need this because, for large files, we want the browser to download the content directly, bypassing our backend.
It seems the generated signed URL will always carry the same unlimited expiry as our SAS token.
Please note that we (our backend) do not have access to the Azure storage account itself, so we cannot generate an access token.
Is there any way our problem could be solved?
If I understand correctly, you get a SAS token that never expires, and you want to specify an expiry date when you use that token in your SAS URL. This is not possible.
Essentially, a SAS URL for a blob is the base blob URL (https://account.blob.core.windows.net/container/blob) + the SAS token.
You cannot change any of the parameters of the SAS token when using it in a SAS URL, because the sig portion of the SAS URL is computed from the other parameters in your SAS token, such as se, st, and so on. Changing any of them will invalidate the SAS token.
Furthermore, you can't create a new SAS Token using another SAS Token.
The only solution to your problem is to have your proxy service create a SAS token with a predefined expiry time.
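As a rough illustration, here is a sketch of what the proxy could do, using the same legacy SDK as elsewhere on this page. The container and blob names are made up, and the proxy is assumed to hold the account connection string.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

static string GetTimeLimitedDownloadUrl(string connectionString)
{
    CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
    CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("content");
    CloudBlockBlob blob = container.GetBlockBlobReference("large-file.bin");

    // Read-only SAS that expires in 15 minutes.
    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
    };

    // Blob URL + SAS token = the "pre-signed" URL handed to the browser.
    return blob.Uri + blob.GetSharedAccessSignature(policy);
}

Because the proxy computes the signature with the account key, the expiry (se) is baked into the sig and the browser cannot extend it.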
I create SAS tokens on the server on demand and send them to users so that they can upload blobs. By default, each token is set to expire in an hour. I'm also using Azure Functions for server-side processing.
// Parse the storage account from its connection string
// (assumes connectionString is available, e.g. from configuration).
var cloudStorageAccount = CloudStorageAccount.Parse(connectionString);

var sharedAccessAccountPolicy = new SharedAccessAccountPolicy
{
    Permissions = SharedAccessAccountPermissions.Read | SharedAccessAccountPermissions.Write,
    Services = SharedAccessAccountServices.Blob,
    ResourceTypes = SharedAccessAccountResourceTypes.Object,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
    Protocols = SharedAccessProtocol.HttpsOnly
};

var token = cloudStorageAccount.GetSharedAccessSignature(sharedAccessAccountPolicy);
What I'd like to do is expire a SAS token once it has been used successfully, by listening to blob changes via an EventGridTrigger. For example, if it took the user 10 minutes to upload a file, the token should no longer be usable afterwards. This is to prevent abuse, because the API that generates SAS tokens is rate-limited: if I want the user to upload only one file per hour, I need a way to enforce that, and somebody with a fast internet connection could theoretically upload dozens of files while a one-hour token is still valid.
So my question is: is it possible to programmatically expire a token even if its expiry date has not been reached? Alternatively, one-time tokens would also work in my scenario, if those are possible.
I believe you can use a user delegation SAS for this. A user delegation SAS can be created and revoked programmatically.
https://learn.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas
Revoke the user delegation key
You can revoke the user delegation key by calling the Revoke User Delegation Keys operation. When you revoke the user delegation key, any shared access signatures relying on that key become invalid. You can then call the Get User Delegation Key operation again and use the key to create new shared access signatures. This approach is the quickest way to revoke a user delegation SAS.
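For illustration, a minimal sketch using the newer Azure.Storage.Blobs (v12) SDK; the container and blob names are placeholders, and it assumes the caller's identity has an RBAC role that permits requesting a user delegation key.

using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

static string CreateUserDelegationSas(string accountName)
{
    var serviceClient = new BlobServiceClient(
        new Uri($"https://{accountName}.blob.core.windows.net"),
        new DefaultAzureCredential());

    // Key validity window; revoking user delegation keys invalidates
    // every SAS built on them.
    UserDelegationKey key = serviceClient.GetUserDelegationKey(
        startsOn: DateTimeOffset.UtcNow,
        expiresOn: DateTimeOffset.UtcNow.AddHours(1));

    var sasBuilder = new BlobSasBuilder
    {
        BlobContainerName = "uploads",
        BlobName = "file.bin",
        Resource = "b", // "b" = blob
        ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
    };
    sasBuilder.SetPermissions(BlobSasPermissions.Write);

    // Sign with the user delegation key instead of the account key.
    return sasBuilder.ToSasQueryParameters(key, accountName).ToString();
}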
You cannot revoke a SAS token before its expiration date unless it is associated with a stored access policy and you revoke that policy, or you change the access key the token was signed with. I don't think either of those ideas really applies to your case. SAS tokens are essentially self-contained and cannot be altered once issued, so you cannot expire them early.
See the Revocation section on this page for the official explanation:
https://learn.microsoft.com/en-us/azure/storage/blobs/security-recommendations#revocation
"A service SAS that is not associated with a stored access policy cannot be revoked."
Also, there are no one-time use SAS tokens and according to this feedback request, Microsoft has no plans to implement that feature: https://feedback.azure.com/forums/217298-storage/suggestions/6070592-one-time-use-sas-tokens
Your best bet is to simply keep the expiration time as short as possible for your use case. If you absolutely must limit uploads for a specific user, then consider routing the user through a separate controlled app (such as a Web API) instead of letting them go directly to storage; it can act as a gatekeeper, checking previous uploads and implementing the limit logic.
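For reference, a rough sketch of the stored access policy variant with the legacy SDK (the policy name is made up). Removing or editing the policy later revokes any SAS that references it:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

static string CreateRevocableSas(string connectionString)
{
    CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
        .CreateCloudBlobClient()
        .GetContainerReference("uploads");

    // Attach a named policy to the container.
    BlobContainerPermissions permissions = container.GetPermissions();
    permissions.SharedAccessPolicies["upload-policy"] = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Write,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
    };
    container.SetPermissions(permissions);

    // SAS that references the policy; deleting "upload-policy" invalidates it.
    return container.GetSharedAccessSignature(null, "upload-policy");
}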
I have blob storage with some resources. I provide SAS tokens to clients, and each token is generated for one specific blob and one client. After some amount of time I want to rotate my account keys, which will invalidate all of the clients' current tokens (clients do not have the account key; they only have a token).
I was wondering whether someone has had a similar case, where after a key rotation you have to provide new SAS tokens to clients using the Azure Storage REST API. I know that in this situation a client will get a 403 Unauthorized, so one option is for it to send another request for a new token and then retry the request for the resource.
Or maybe I could send back a 301 Moved HTTP code with a link to the REST endpoint that generates a new token, so the client wouldn't need additional knowledge about another endpoint.
Does anyone have experience with token rotation like this?
As mentioned in the comment, because your clients access the blob directly, you wouldn't know they got a 403 error unless they told you about it.
If it is acceptable, you could take a look at Authorize access to Azure blobs and queues using Azure Active Directory. Once that is configured, clients can still access the storage even after you rotate the account keys. Note, however, that this feature applies at the container level at the finest, not the blob level, so I'm not sure whether that is acceptable for you.
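As a sketch of what the client side looks like once Azure AD access is configured, using the Azure.Storage.Blobs v12 SDK with Azure.Identity; the URL is a placeholder, and the identity is assumed to hold a data-plane role such as Storage Blob Data Reader:

using System;
using Azure.Identity;
using Azure.Storage.Blobs;

// The client authenticates with Azure AD instead of a SAS or account key,
// so rotating the account keys does not break its access.
var blobClient = new BlobClient(
    new Uri("https://account.blob.core.windows.net/container/blob.txt"),
    new DefaultAzureCredential());

using var stream = blobClient.OpenRead();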
How are tokens stored in local storage and session storage?
How do you generate a token?
And which is more secure for admin user authentication in an Angular app?
For Angular authentication, is a token kept in local storage as secure as one kept in session storage in the browser?
Local storage is a new feature of HTML5 that basically allows you (a web developer) to store any information you want in your user’s browser using JavaScript.
In practice, local storage is just one big old JavaScript object that you can attach data to (or remove data from).
Example:
// Treating localStorage as a plain object:
localStorage.userName = "highskillzz";
// ...or using the setItem API:
localStorage.setItem("objects", "0");
// Once data is in localStorage, it'll stay there until it is removed explicitly.
console.log(localStorage.userName + " has " + localStorage.objects + " number of objects.");
// Removing data from local storage is also pretty easy. Uncomment the lines
// below to try it:
//localStorage.removeItem("userName");
//localStorage.removeItem("objects");
It was designed to be a simple string-only key/value store that developers could use to build slightly more complex single-page apps. That's it.
From my understanding of JWTs, local/session storage, and your question, using session storage to hold the JWT would be ideal, since session storage is separate for each browser tab. It's also simply easier for a developer to manage tokens this way.
In terms of security, both local and session storage should be okay, given that a JWT is ephemeral.
Earlier I was fetching a blob without using SAS authorization.
But now I want only those users who have the SAS token to be able to access the blob.
Let's say I want to access a file at
https://storageaccount.blob.core.windows.net/sascontainer/sasblob.txt
Now I have the SAS token too, so the new URL would be
https://storageaccount.blob.core.windows.net/sascontainer/sasblob.txt?sv=2012-02-12&st=2013-04-12T23%3A37%3A08Z&se=2013-04-13T00%3A12%3A08Z&sr=b&sp=rw&sig=dF2064yHtc8RusQLvkQFPItYdeOz3zR8zHsDMBi4S30%3D
What do I do next so that only those with the second link can fetch the sasblob.txt file?
What changes do I have to make in the Azure portal?
I guess the only change I have to make on the client side is the URL: I need to replace the URL without the SAS token with the URL containing the SAS token.
As long as the blob is private (which can be set at the container level), nobody will have access without the SAS-augmented URI. Even if you kept giving out the public URI, it wouldn't work if the container was marked as private.
Also, in your example you've created a fictitious sascontainer. Note that shared access signatures work on any blob in any container; you don't need a specially designated container.
A SAS-based URI will be valid until its expiry time passes (or you delete the blob). If you want more control, such as the ability to disable a URI, you'd need to use a stored access policy. Just something for you to consider looking into, and there's plenty of documentation on that, should you go that route.
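If you'd rather set that private access level in code than in the portal, here is a small sketch with the newer Azure.Storage.Blobs SDK; connectionString is assumed to come from your configuration.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Make the container private: anonymous requests to the plain blob URL now
// fail, and only requests carrying a valid SAS (or other credentials) succeed.
string connectionString = "<storage connection string>"; // placeholder
var container = new BlobContainerClient(connectionString, "sascontainer");
container.SetAccessPolicy(PublicAccessType.None);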