I'm trying to create time-limited URLs for smooth streaming of media stored in Azure Media Services.
I am working from the code supplied here: Windows Azure Smooth Streaming example.
I upload a video file to a new Asset. I encode that video file using Azure Media Service encoding with the preset "H264 Adaptive Bitrate MP4 Set 720p". With the resulting encoded asset, I then attempt to create a streaming URL by creating an Access Policy and then a Locator, which I use to generate the URL used for streaming.
Here is the code:
string urlForClientStreaming = "";
IAssetFile manifestFile = (from f in Asset.AssetFiles
                           where f.Name.EndsWith(".ism")
                           select f).FirstOrDefault();
if (manifestFile != null)
{
    // Create a 1-hour read-only access policy.
    IAccessPolicy policy = _mediaContext.AccessPolicies.Create("Streaming policy", TimeSpan.FromHours(1), AccessPermissions.Read);

    // Create a locator to the streaming content on an origin.
    ILocator originLocator = _mediaContext.Locators.CreateLocator(LocatorType.OnDemandOrigin, Asset, policy, DateTime.UtcNow.AddMinutes(-5));

    urlForClientStreaming = originLocator.Path + manifestFile.Name + "/manifest";

    if (contentType == MediaContentType.HLS)
        urlForClientStreaming = String.Format("{0}{1}", urlForClientStreaming, "(format=m3u8-aapl)");
}
return urlForClientStreaming;
This works great until the sixth time you execute that code against the same Asset. Then you receive this error:
"Server does not support setting more than 5 shared access policy identifiers on a single container."
So, that's fine; I don't need to create a new AccessPolicy every time. I can reuse the one I created previously and build a Locator using that same policy. However, even then, I get the error about 5 shared access policies on a single container.
Here is the new code that creates the locator with the same AccessPolicy used previously:
string urlForClientStreaming = "";
IAssetFile manifestFile = (from f in Asset.AssetFiles
                           where f.Name.EndsWith(".ism")
                           select f).FirstOrDefault();
if (manifestFile != null)
{
    // Find the existing 1-hour read-only access policy, or create it if it doesn't exist yet.
    IAccessPolicy accessPolicy =
        (from p in _mediaContext.AccessPolicies where p.Name == "myaccesspolicy" select p).FirstOrDefault();
    if (accessPolicy == null)
    {
        accessPolicy = _mediaContext.AccessPolicies.Create("myaccesspolicy", TimeSpan.FromHours(1), AccessPermissions.Read);
    }

    // Create a locator to the streaming content on an origin.
    ILocator originLocator = _mediaContext.Locators.CreateLocator(LocatorType.OnDemandOrigin, Asset, accessPolicy, DateTime.UtcNow.AddMinutes(-5));

    urlForClientStreaming = originLocator.Path + manifestFile.Name + "/manifest";

    if (contentType == MediaContentType.HLS)
        urlForClientStreaming = String.Format("{0}{1}", urlForClientStreaming, "(format=m3u8-aapl)");
}
return urlForClientStreaming;
I don't understand why it's saying I've created 5 shared access policies. In the case of the second block of code, I only ever create one access policy. I can verify there is only ever one AccessPolicy by viewing the contents of _mediaContext.AccessPolicies; there is always just one access policy in that list.
At some point this will likely have many users requesting access to the same Asset. The URLs provided to these clients need to be time-limited, as per our client's requirements.
Is this not the appropriate means to create a URL for smooth streaming of an asset?
Late reply I know...
Given your requirement to create a single URL that can be used by anyone indefinitely, I would suggest that you:
1. Create a long-lived locator when you create the asset, e.g. for a year; you can use the same access policy each time, as in your second example.
2. When you're building the URL for streaming, get that locator from the asset.
3. Check the length of time left on the locator; if it's less than a certain amount (e.g. a month), extend the locator using ILocator.Update, e.g. for another year (see the sketch below). Updating the expiry date of the locator does not affect the original access policy that you used to create the locator.
4. Profit.
HTH
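To make step 3 concrete, here is a minimal sketch of the renewal check, assuming the same _mediaContext and Asset as in the question; the one-month threshold and one-year extension are example values, not prescribed ones:

// Find the existing long-lived origin locator on the asset.
ILocator longLivedLocator = Asset.Locators
    .Where(l => l.Type == LocatorType.OnDemandOrigin)
    .FirstOrDefault();

// If it expires within a month, push the expiry out by another year.
// Updating the locator's expiry does not modify the access policy
// it was originally created from.
if (longLivedLocator != null &&
    longLivedLocator.ExpirationDateTime < DateTime.UtcNow.AddMonths(1))
{
    longLivedLocator.Update(DateTime.UtcNow.AddYears(1));
}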
Now, with the Azure Media Services content protection feature, you can encrypt your media file with either AES or PlayReady and generate a long-lived locator. At the same time, you set a token-authorization policy for the content key; the token duration can be set to a short period of time (enough for the player to retrieve the content key). This way you can control access to your content. For more information, you can refer to my blog: http://azure.microsoft.com/blog/2014/09/10/announcing-public-availability-of-azure-media-services-content-protection-services/
The locators were not designed to do per-user access control. Use a Digital Rights Management system for that. They have concepts of viewing windows, persistent and non-persistent licensing and much more. Specifically, I'm talking about using PlayReady encryption in WAMS and a PlayReady server to configure and provide the licenses (there is EzDRM in the Azure Portal, also BuyDRM and others).
Locators offer basic on/off switching of streaming access. You can create up to five because they use the underlying SAS limitation of five shared access policies per container.
It's not a good sign when the method I'm asking about, GetUserDelegationKey, yields zero search results on SO. Good luck, me.
I have a C# console app, .NET Framework 4.8, using Azure.Storage.Blobs and Azure.Identity, that will run on customer servers and access Azure blob storage to hold some stuff. I'm doing all of this with the library, not rolling my own REST. Built with VS2019, testing on Win10.
The plan is to use a single Azure storage account that I own, and create one Container per customer project with per-customer credentials that permit them only their own container. Projects never ever talk to each other.
I could set up credentials in the Azure portal by hand, but I am stubbornly trying to do this in software, where a simple project-management app connects as the project app's service principal (which I defined in Azure AD), creates the container, then creates the shared access signatures with a limited lifetime.
The storage account name / container name / access signature would then be configured on the customer server.
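To be concrete, the customer-side code would then only ever need something like this (just a sketch; the container name and SAS query string are placeholders for the per-customer configured values):

var containerUri = new Uri(
    "https://stevestorageacct.blob.core.windows.net/customer-project-container"
    + "?sv=...&sig=...");   // limited-lifetime SAS handed to this customer

// The customer can only reach their own container with this signature.
var containerClient = new BlobContainerClient(containerUri);

using (var stream = System.IO.File.OpenRead(@"C:\temp\some-file.dat"))
{
    containerClient.GetBlobClient("some-file.dat").Upload(stream);
}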
I'm having a terrible time.
Note: this is using the newer BlobClient mechanisms, not the older CloudBlob stuff. Dunno if that matters.
This is all documented here at Microsoft, and following even the simple example gets me the same failure.
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Identity;

namespace Azure.Test
{
    class Program
    {
        static void Main(string[] args)
        {
            var serviceClient = new BlobServiceClient(
                new Uri("https://stevestorageacct.blob.core.windows.net"),
                new DefaultAzureCredential(true)); // true = pop up login dialog

            /*BOOM*/ UserDelegationKey key = serviceClient.GetUserDelegationKey(
                DateTimeOffset.UtcNow,
                DateTimeOffset.UtcNow.AddDays(30));

            // use the key to create the signatures
        }
    }
}
Even though this program couldn't be simpler, it fails every time with an XML error when calling GetUserDelegationKey:
Unhandled Exception: Azure.RequestFailedException: The value for one of the XML nodes is not in the correct format.
RequestId:c9b7d324-401e-0127-4a4c-1fe6ce000000
Time:2020-05-01T00:06:21.3544489Z
Status: 400 (The value for one of the XML nodes is not in the correct format.)
ErrorCode: InvalidXmlNodeValue
The XML being sent is supposed to be super simple, I think just the start/end dates for validity, but I have no idea how to get at it to inspect it, and HTTP is forbidden for this kind of call, so no Wireshark.
It also fails the same way when I use my application's service principal:
static void Main(string[] args)
{
    var tokenCredential = new ClientSecretCredential(
        "xxxx-xxxx-xxxx-xxxxx",  // tenant ID
        "yyyy-yyyy-yyyy-yyyyy",  // application ID
        "**************");       // client secret

    var serviceClient = new BlobServiceClient(
        new Uri("https://stevestorageacct.blob.core.windows.net"),
        tokenCredential);

    UserDelegationKey key = serviceClient.GetUserDelegationKey(
        DateTimeOffset.UtcNow,
        DateTimeOffset.UtcNow.AddDays(30));
    // ALSO: boom
}
I'm really at a loss.
I suppose I could try rolling my own REST and playing with it that way, but it doesn't feel like this should be necessary: this kind of error feels like a bug even if I'm doing something wrong. XML nodes?
Also open to entirely different ways of approaching this problem if they are superior, but would like to at least find out why this is failing.
I've had some issues with this also. The first thing to try is removing the start time (pass null) or setting it ~15 minutes in the past. This avoids clock skew between the requesting PC and the Azure servers.
The second thing to verify is that the user you are running as has the "Storage Blob Data Contributor" role on the storage account. I had to grant it at the storage-account level in the end; otherwise it just refused to work for me. However, in your use case you might need to grant it at the container level so that you can have one container per client.
Hope this helps.
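For reference, a minimal sketch combining both suggestions and then building a container-scoped SAS from the delegation key (the container name, permissions and validity windows are illustrative; note that a user delegation key can only be valid for a limited period, at most seven days, so a 30-day expiry will be rejected regardless):

using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

var serviceClient = new BlobServiceClient(
    new Uri("https://stevestorageacct.blob.core.windows.net"),
    new DefaultAzureCredential(true));

// Null start time avoids clock skew; keep the key expiry within the allowed window.
DateTimeOffset keyExpiry = DateTimeOffset.UtcNow.AddDays(7);
UserDelegationKey key = serviceClient.GetUserDelegationKey(null, keyExpiry);

// Build a read-only, time-limited SAS for one customer's container
// ("customer-container" is a placeholder).
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "customer-container",
    Resource = "c",                                   // container-level SAS
    StartsOn = DateTimeOffset.UtcNow.AddMinutes(-15), // back-dated for clock skew
    ExpiresOn = keyExpiry                             // cannot outlive the delegation key
};
sasBuilder.SetPermissions(BlobContainerSasPermissions.Read | BlobContainerSasPermissions.List);

string sasToken = sasBuilder
    .ToSasQueryParameters(key, serviceClient.AccountName)
    .ToString();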
I have created an Azure storage account and a file storage share, and I have generated a SAS token. When I try to access a file using the SAS token, I get the error "The remote server returned an error: (403) Forbidden."
I am able to generate the SAS token, but when I try to access a file in the file share an exception is thrown. Copying and pasting the URL into a browser returns this error:
<Error>
  <Code>AuthorizationResourceTypeMismatch</Code>
  <Message>
    This request is not authorized to perform this operation using this resource type.
    RequestId:4cbc0cbe-401a-00c2-2edf-202bc4000000
    Time:2019-06-12T05:26:39.4816687Z
  </Message>
</Error>
Code I am using to generate the SAS token:
static string GetAccountSASToken()
{
    // Assumes storageAccount is an initialized CloudStorageAccount.
    SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
    {
        Permissions = SharedAccessAccountPermissions.Read |
                      SharedAccessAccountPermissions.Write |
                      SharedAccessAccountPermissions.List,
        Services = SharedAccessAccountServices.File,
        ResourceTypes = SharedAccessAccountResourceTypes.Service,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
        Protocols = SharedAccessProtocol.HttpsOnly,
    };

    return storageAccount.GetSharedAccessSignature(policy);
}
Code I am using to access the file:
XDocument objdoc = XDocument.Load(filepath + sasToken);
After loading the file into the XDocument I have to perform some read and write operations. Please help me find the mistake I am making.
I was encountering the same problem, and the solution from user3404686 (2019-07-13) is correct. After the fact it's much clearer, but when it's still a problem without resolution it can be baffling.
Resource types are authorised independently of each other, rather than there being a hierarchy, i.e. 'service' does not include 'container' and 'object' authorisations (which was my misunderstanding).
The storage services API documentation describes how resource type permissions are assigned:
Service (s): Access to service-level APIs (e.g., Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)
Container (c): Access to container-level APIs (e.g., Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)
Object (o): Access to object-level APIs for blobs, queue messages, table entities, and files (e.g., Put Blob, Query Entity, Get Messages, Create File, etc.)
Further down in the same document, it provides examples of the service, resource type and permissions required for various operations, allowing minimum-required-permissions granularity when assigning permissions via the SAS token.
After understanding this, the error code AuthorizationResourceTypeMismatch makes more sense: the resource type(s) the SAS token is authorised for do not match the resource types you're attempting to access.
In SharedAccessAccountPolicy I changed ResourceTypes = SharedAccessAccountResourceTypes.Service to ResourceTypes = SharedAccessAccountResourceTypes.Object. Then it works for me.
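For example, the corrected policy then looks like this (a sketch; whether you also need Container or Service alongside Object depends on which operations from the list above you call, and storageAccount is assumed to be an initialized CloudStorageAccount):

SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
{
    Permissions = SharedAccessAccountPermissions.Read |
                  SharedAccessAccountPermissions.Write |
                  SharedAccessAccountPermissions.List,
    Services = SharedAccessAccountServices.File,
    // Object-level access is what reading/writing a file's contents needs.
    ResourceTypes = SharedAccessAccountResourceTypes.Object,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
    Protocols = SharedAccessProtocol.HttpsOnly
};

string sasToken = storageAccount.GetSharedAccessSignature(policy);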
I am using SAS token authentication along with a device ID (or publisher ID) in my Event Hub publisher code. But I see that it is possible to send an event to any partition ID by using the "CreatePartitionedSender" client, even though I have authenticated using a device ID, whereas I do not want two different device IDs publishing events to the same partition. Is it possible to add some custom "authorization" code along with the SAS authentication to allow a device access to only limited partitions?
The idea behind authorizing a device and partition ID combination is to accommodate a single Event Hub for multiple tenants. Please advise if I am missing anything.
Please see the publisher code snippet below:
var publisherId = "1d8480fd-d1e7-48f9-9aa3-6e627bd38bae";
string token = SharedAccessSignatureTokenProvider.GetPublisherSharedAccessSignature(
new Uri("sb://anyhub-ns.servicebus.windows.net/"),
eventHubName, publisherId, "send",
sasKey,
new TimeSpan(0, 5, 0));
var factory = MessagingFactory.Create(ServiceBusEnvironment.CreateServiceUri("sb", "anyhub-ns", ""), new MessagingFactorySettings
{
TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(token),
TransportType = TransportType.Amqp
});
var client = factory.CreateEventHubClient(String.Format("{0}/publishers/{1}", eventHubName, publisherId));
var message = "Event message for publisher: " + publisherId;
Console.WriteLine(message);
var eventData = new EventData(Encoding.UTF8.GetBytes(message));
await client.SendAsync(eventData);
await client.CreatePartitionedSender("5").SendAsync(eventData);
await client.CreatePartitionedSender("6").SendAsync(eventData);
I notice in your example code that you have
var connStr = ServiceBusConnectionStringBuilder.CreateUsingSharedAde...
and then have
CreateFromConnectionString(connectionString
This suggests that you may have used a connection string containing the send key you used to generate the token, rather than the limited-access token. In my own tests I did not manage to connect to an Event Hub using the EventHubClient (which makes an AMQP connection) with a publisher-specific token. This doesn't mean it's not supported, just that I got errors that made sense, and the ability to do so doesn't appear to be documented.
What is documented, and has an example, is creating the publisher-specific tokens and sending events to the Event Hub using the HTTP interface. If you examine the generated SAS token, you can see that it grants access to
[namespace].servicebus.windows.net/[eventhubname]/publishers/[publisherId]
This is consistent with the documentation on the security model, and the general discussion of publisher policies in the overview. I would expect the guarantee on publisherId -> PartitionKey to hold with this interface. Thus each publisherId would have its events end up in a consistent partition.
This may be less than ideal for your multitenant system, but the code to send messages is arguably simpler, and it is a better match for the intended use case of per-device keys. As discussed in this question, you would need to do something rather dirty to get each publisher their own partition, and you would be outside the designed use cases.
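A rough sketch of that HTTP-based send, reusing the publisher-scoped token from the question (this assumes the same token, eventHubName and publisherId variables; the URL follows the resource path the token is scoped to, and the content type mirrors the documented samples):

// Send one event over HTTPS with the publisher-scoped SAS token.
using (var httpClient = new HttpClient())
{
    var url = string.Format(
        "https://anyhub-ns.servicebus.windows.net/{0}/publishers/{1}/messages",
        eventHubName, publisherId);

    // The publisher SAS token goes directly into the Authorization header.
    httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", token);

    var body = "Event message for publisher: " + publisherId;
    var content = new StringContent(body, Encoding.UTF8);
    content.Headers.ContentType =
        MediaTypeHeaderValue.Parse("application/atom+xml;type=entry;charset=utf-8");

    HttpResponseMessage response = await httpClient.PostAsync(url, content);
    response.EnsureSuccessStatusCode();
}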
Cross linking questions can be useful.
For a complete explanation of Event Hubs publisher policy, refer to this blog.
In short, if you want publisher policy, you will not get a partitioned sender. Publisher policy is an extension to the SAS security model, designed to support a very high number of senders (on the scale of a million senders per Event Hub).
With the current authentication model, you cannot grant such fine-grained access to publishers. Authentication per partition is not currently supported, as per the Event Hubs Authentication and Security Model overview.
You have to either "trust" your publishers or consider a different tenant scheme, i.e. one Event Hub per tenant.
I have an MVC 4 website hosted in Azure that needs to upload a video and, on a different page, allow that uploaded video to be streamed back to a client player.
The first option allows the user to upload and encode the video (.mp4).
The second option is that I manually upload and encode the video and provide the URL to the user.
In either case, the video would be presented to the users on another page.
I'm having a devil of a time trying to get this to work. Any suggestions/working samples?
I used the code below in my C# application and it works properly.
public static string getBlobStreamURL(string fsBloblFilePath, string fsdirectory)
{
    // Assumes storageAccount is an initialized CloudStorageAccount.
    CloudBlobContainer cloudBlobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference(fsdirectory);
    var cloudBlob = cloudBlobContainer.GetBlockBlobReference(fsBloblFilePath);

    // Generate a read-only shared access signature valid for one hour.
    var sharedAccessSignature = cloudBlob.GetSharedAccessSignature(new Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy()
    {
        Permissions = Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
    });

    // Append the SAS to the blob URI to get a time-limited streaming URL.
    var streamURL = string.Format("{0}{1}", cloudBlob.Uri, sharedAccessSignature);
    return streamURL;
}
I've written a previous answer on how to serve videos from Azure.
You'll want to use Azure Blob Storage to store the files. This will scale very well, let you take advantage of the Azure CDN for faster delivery, and the outgoing traffic won't count against your website instances.
You can then use any HTML5 or Flash player you want. One important thing: make sure you set the content type when saving the file. You can also change the Azure service version to support seeking within the video.
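A minimal sketch of both of those steps with the classic Microsoft.WindowsAzure.Storage SDK (container and blob names are placeholders; storageAccount is assumed to be an initialized CloudStorageAccount, and the service-version value is the one commonly cited for enabling range requests):

CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

// 1. Set the content type on the uploaded video so players recognise it.
CloudBlockBlob videoBlob = blobClient
    .GetContainerReference("videos")
    .GetBlockBlobReference("myvideo.mp4");
videoBlob.Properties.ContentType = "video/mp4";
videoBlob.SetProperties();

// 2. Bump the default service version so the blob service answers range
//    requests properly, which is what lets players seek within the video.
ServiceProperties serviceProperties = blobClient.GetServiceProperties();
serviceProperties.DefaultServiceVersion = "2011-08-18"; // or any later version
blobClient.SetServiceProperties(serviceProperties);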
Apparently access to Azure queues can be controlled using a Shared Access Policy. The example here http://msdn.microsoft.com/en-us/library/windowsazure/hh508996 confirms this, but then only provides an example for a blob shared access policy. Does anyone have any references on how the same is achieved for Queues (and Tables)?
You can find the complete sample on the Windows Azure Storage blog: Introducing Table SAS (Shared Access Signature), Queue SAS and update to Blob SAS. Here is the part that assigns the policy to a queue:
// Create the GC queue SAS policy.
QueuePermissions gcQueuePermissions = new QueuePermissions();
SharedAccessQueuePolicy gcQueuePolicy = new SharedAccessQueuePolicy()
{
// Providing the max duration
SharedAccessExpiryTime = DateTime.MaxValue,
// Permission is granted to process queue messages.
Permissions = SharedAccessQueuePermissions.ProcessMessages
};
// Associate the above policy with a signed identifier.
gcQueuePermissions.SharedAccessPolicies.Add(
    gcPolicySignedIdentifier,
    gcQueuePolicy);

// The call below results in a Set Queue ACL request being sent to
// Windows Azure Storage in order to store the policy and associate it with the
// "GCAccessPolicy" signed identifier that will be referred to
// by the generated SAS token.
this.gcQueue.SetPermissions(gcQueuePermissions);
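Once the policy has been stored, generating a SAS token that refers to the signed identifier is one more call. A sketch (assuming the same gcQueue and gcPolicySignedIdentifier as above, and the Microsoft.WindowsAzure.Storage client library):

// Generate a SAS token for the queue that references the stored signed
// identifier. The constraints (expiry, permissions) come from the stored
// policy, so the ad-hoc policy passed here is left empty.
string sasToken = this.gcQueue.GetSharedAccessSignature(
    new SharedAccessQueuePolicy(),
    gcPolicySignedIdentifier);

// A client that only holds the SAS token can then process messages.
var clientQueue = new CloudQueue(this.gcQueue.Uri, new StorageCredentials(sasToken));
CloudQueueMessage message = clientQueue.GetMessage();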
Does the announcement post for table SAS (especially the SAS token producer portion of the sample) help you in any way? It seems to me that SAS is available on Tables and Queues starting with SDK 1.7.