Streaming Videos from Azure (Blob Storage or Media Services)

I have an MVC 4 website hosted in Azure that needs to upload a video and, on a different page, allow that uploaded video to be streamed back to a client player.
The first option is to let the user upload and encode the video (.mp4).
The second option is that I manually upload and encode the video and provide the URL to the user.
In either case, the video would be presented to the users on another page.
I'm having a devil of a time trying to get this to work. Any suggestions or working samples?

I used the code below in my C# application and it works properly.
public static string getBlobStreamURL(string fsBlobFilePath, string fsDirectory)
{
    // storageAccount is assumed to be initialised elsewhere
    CloudBlobContainer cloudBlobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference(fsDirectory);
    var cloudBlob = cloudBlobContainer.GetBlockBlobReference(fsBlobFilePath);

    // Create a read-only SAS that expires in one hour
    var sharedAccessSignature = cloudBlob.GetSharedAccessSignature(new Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPolicy()
    {
        Permissions = Microsoft.WindowsAzure.Storage.Blob.SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
    });

    // Append the SAS token to the blob URI to get a streamable URL
    var streamURL = string.Format("{0}{1}", cloudBlob.Uri, sharedAccessSignature);
    return streamURL;
}
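For context, a hedged usage sketch: a hypothetical MVC action hands the SAS URL to a view, where an HTML5 video tag's src attribute can point at it (the action and the "videos" container name are made up, not part of the original answer).
public ActionResult Watch(string fileName)
{
    // "videos" is an assumed container name; getBlobStreamURL is the helper above
    ViewBag.StreamUrl = getBlobStreamURL(fileName, "videos");
    return View();
}
The view can then render something like <video src="@ViewBag.StreamUrl" controls></video>.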

I've written a previous answer on how to serve videos from Azure.
You'll want to use Azure Blob Storage to store the files. This will scale very well, let you take advantage of the Azure CDN for faster delivery, and the outgoing traffic won't count against your website instances.
You can then use any HTML5 or Flash player you want. One important thing: when saving the file, make sure you set the content type. You can also change the Azure storage service version to support video seeking.
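To make those two points concrete, here is a hedged sketch using the classic Microsoft.WindowsAzure.Storage SDK (the container and blob names are made up, and storageAccount is the same assumed account object as in the helper above):
var client = storageAccount.CreateCloudBlobClient();
var container = client.GetContainerReference("videos");      // assumed container name
var blob = container.GetBlockBlobReference("movie.mp4");     // assumed blob name

// Set the content type so browsers and players treat the blob as MP4 video
blob.Properties.ContentType = "video/mp4";
blob.SetProperties();

// Raise the default service version so the service honours HTTP range requests
// (versions from 2011-08-18 onwards), which is what lets players seek within the video
var serviceProperties = client.GetServiceProperties();
serviceProperties.DefaultServiceVersion = "2013-08-15";
client.SetServiceProperties(serviceProperties);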

Related

Can't implement Azure Web App service access to Azure Storage container (blob) using MSI

I have an Azure resource group which contains a Web App Service and a Storage account with a blob container. My web app (.NET Core) tries to retrieve and show an image from the container. The container has no public access to content (its access level is private). I created a system-assigned identity for my app and gave it the Reader role in the storage access control (IAM).
This is how I get access to blobs in the app's code:
const string blobName = "https://storagename.blob.core.windows.net/img/Coast.jpg";
string storageAccessToken = await GetStorageAccessTokenAsync();
var tokenCredential = new TokenCredential(storageAccessToken);
var storageCredentials = new StorageCredentials(tokenCredential);
var blob = new CloudBlockBlob(new Uri(blobName), storageCredentials);
ImageBlob = blob.Uri;
GetStorageAccessTokenAsync() does this:
private static async Task<string> GetStorageAccessTokenAsync()
{
    // Get an AAD token for Azure Storage using the app's managed identity
    var tokenProvider = new AzureServiceTokenProvider();
    return await tokenProvider.GetAccessTokenAsync("https://storage.azure.com/");
}
Then the image is displayed by
<img src="@Model.ImageBlob" />
I don't get any exceptions in my code, but the image from the blob container isn't shown, and the browser console shows a 404 error (the specified resource doesn't exist).
When I change the container's access level to "Blob" (public access), the app works fine and the image is displayed.
Apparently something is wrong with the credentials part, but I couldn't find any working example or detailed explanation of how it actually should work.
Any help is very appreciated.
UPDATE:
Thank you all who responded. So, it seems I've got two problems here.
1) I don't get the credentials properly.
I can see that the "AzureServiceTokenProvider" object (Microsoft.Azure.Services.AppAuthentication) that I create has an empty PrincipalUsed property at runtime.
My application is deployed to an Azure App Service, which has a system-managed identity, and that identity (service principal) is given permissions in Azure Storage (I changed the permission from account Reader to Storage Blob Data Reader, as was suggested).
Shouldn't it get all the data it needs from the current context? If not, what can I do here?
2) I use the wrong method to show the image, but since the app has no access to storage anyway, I can't fix it yet.
But still, what is the common way to do that in my case? I mean, there is no public access to the storage and I use "CloudBlockBlob" to reach the images.
Reader gives access to read the control plane, but not the data plane. The role you need is Storage Blob Data Reader, which gives access to read blob contents.
For more details about this, check out: https://learn.microsoft.com/en-us/azure/role-based-access-control/role-definitions#data-operations-example
When you use <img src="@Model.ImageBlob" />, the browser sends no authorization header in the request. In your code, you are fetching the token, but the token is not sent in the authorization header when the image itself is fetched, so the storage API treats it as an anonymous request. This is the reason you are getting a 404.
You need to send the authorization header when fetching the image. This code works for me:
public async Task<ActionResult> Image()
{
    const string blobName = "https://storage.blob.core.windows.net/images/image.png";

    // Get an AAD token for Azure Storage and wrap it in storage credentials
    string storageAccessToken = await GetStorageAccessTokenAsync().ConfigureAwait(false);
    var tokenCredential = new TokenCredential(storageAccessToken);
    var storageCredentials = new StorageCredentials(tokenCredential);

    // Open the blob server-side and stream it back to the browser
    var blob = new CloudBlockBlob(new Uri(blobName), storageCredentials);
    Stream blobStream = blob.OpenRead();
    return File(blobStream, blob.Properties.ContentType, "image.png");
}
In the view, I use
<img src="/Home/Image" />
Finally, I got it to work. First of all, the part of the code that gets the token and the image from Azure Storage was OK. The second problem, displaying the image in a Razor Pages application, I resolved using this code in the view:
<form asp-page-handler="GetImage" method="get">
    <img src="/MyPageName?handler=GetImage" />
</form>
and the corresponding code in the page model:
public async Task<ActionResult> OnGetGetImageAsync()
{
    // getting image code and returning FileContentResult
}
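For completeness, a hedged sketch of what that handler body might look like, reusing the GetStorageAccessTokenAsync helper and the blob URL from the question (the content type and the exact wiring are assumptions, not the poster's actual code):
public async Task<ActionResult> OnGetGetImageAsync()
{
    // Assumed blob URL; in practice this would come from the model or a query parameter
    const string blobName = "https://storagename.blob.core.windows.net/img/Coast.jpg";

    // Acquire an AAD token for Azure Storage via the app's managed identity
    string storageAccessToken = await GetStorageAccessTokenAsync();
    var storageCredentials = new StorageCredentials(new TokenCredential(storageAccessToken));

    // Download the blob server-side and return it as a FileContentResult
    var blob = new CloudBlockBlob(new Uri(blobName), storageCredentials);
    using (var memoryStream = new MemoryStream())
    {
        await blob.DownloadToStreamAsync(memoryStream);
        return File(memoryStream.ToArray(), "image/jpeg"); // content type assumed
    }
}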
But I'm still wondering: is there a simpler way to do that? Something like adding an image collection to the model, filling it in an "OnGet..." handler, and then displaying its contents in the view. I didn't find a way to use model properties in an <img> tag. Does anyone have any suggestions?

How do I pass-through non-multipart file content from a client to Blob Storage without buffering it in a Web API app?

In my Web API, I need to receive files and then save them in Blob Storage. Clients are not allowed to access Blob Storage and are not aware of it.
I'm trying to avoid buffering files, which could be up to 300 MB in size. I've seen this post...
How do I pass a Stream from a Web API to Azure Blob Storage without temp files?
... but the solution described in the post is not going to work for me because it assumes multipart content which in turn allows for custom providers.
Clients that I need to deal with are not sending files using multipart content. Instead, they simply send file content in message bodies.
Here is what works for me now (with buffering):
using (var inStream = await this.Request.Content.ReadAsStreamAsync())
{
    var blob = container.GetBlockBlobReference(fileName);
    var outStream = await blob.OpenWriteAsync();
    await inStream.CopyToAsync(outStream);
    outStream.Close();
}
Is there a way to connect a Request's Stream with a Blob Stream without the former being buffered?
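For reference, a hedged sketch of how the buffered version above might sit inside a complete Web API action; the controller name, route parameter, and container wiring are assumptions added for illustration.
public class UploadController : ApiController
{
    // Assumed: the connection string comes from configuration and the container already exists
    private static readonly CloudBlobContainer container =
        CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"])
            .CreateCloudBlobClient()
            .GetContainerReference("uploads");

    [HttpPost]
    public async Task<IHttpActionResult> Post(string fileName)
    {
        // Read the raw request body (no multipart parsing) and copy it to the blob
        using (var inStream = await this.Request.Content.ReadAsStreamAsync())
        {
            var blob = container.GetBlockBlobReference(fileName);
            using (var outStream = await blob.OpenWriteAsync())
            {
                await inStream.CopyToAsync(outStream);
            }
        }
        return Ok();
    }
}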

Azure Shared Access Signatures (SAS)

I started using SAS today for the first time and I was intrigued by it, but would I really need it in my current project?
On my Azure website, I want users to be able to upload and download blobs. Every user has their own account.
This is a simple upload function:
//Upload blob
CloudBlobContainer container = CloudStorageServices.GetCloudBlobsContainer();
CloudBlockBlob blockBlob = container.GetBlockBlobReference(file.FileName + guid);
blockBlob.UploadFromStream(file.InputStream);
//Get uri from blob
var blobUrl = blockBlob.Uri;
//Upload table
CloudTable table2 = CloudStorageServices.GetCloudUploadsTable();
UploadEntity uploadtable = new UploadEntity(imagename, entity.RowKey + guid, blobUrl.ToString(), User.Identity.Name, file.FileName + guid);
TableOperation insertOperation = TableOperation.InsertOrReplace(uploadtable);
table2.Execute(insertOperation);
I don't really see the point in using SAS here, but I have a feeling I'm totally wrong.
Why should I use it?
When a user has uploaded a blob, the blobs belonging to that user will be listed, and the user will ONLY be able to select and download their own items (a JavaScript download).
Same here: why would I need SAS?
Thanks!!
If you have a website or web service in front of your blobs, then you don't need to use SAS, since you can control the user permissions yourself. I use SAS mostly in cases where I cannot control the user permissions.
For example, when we need to display images directly from the blob URL, I need to prevent them from being loaded and displayed by other websites. So I generate SAS URLs, valid for only a few seconds, for the images that need to be displayed on that page and put those SAS URLs into the page's HTML. If someone else later tries to read the images through those SAS URLs, they will already have expired.
Another example might be upload. Unlike your scenario, if you need someone to upload directly to a blob then you might need SAS, since you should not give them your storage account keys (see the sketch below).
Hope this helps a bit.
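A hedged sketch of that upload case, reusing the CloudStorageServices helper from the question (the method itself and the 15-minute lifetime are illustrative, not part of the original answer): the server hands the client a short-lived, write-only SAS URL and the client uploads directly to it.
// Generate a write-only SAS URL that a client can upload to directly
public static string GetUploadUrl(string blobName)
{
    CloudBlobContainer container = CloudStorageServices.GetCloudBlobsContainer();
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    // Write-only permission, valid for 15 minutes
    string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Write,
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
    });

    return blob.Uri + sas;
}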

Upload file to Azure Blob Storage directly from browser?

Is it possible to create an HTML form to allow web users to upload files directly to Azure Blob Storage without using another server as an intermediary? S3 and GAE's blobstore both allow this, but I can't find any support for Azure Blob Storage.
EDIT November 2019
You can now refer to the official documentation:
Azure Storage JavaScript Client Library Sample for Blob Operations
Azure Storage client library for JavaScript
Initial answer
There is a New Azure Storage JavaScript client library for browsers (Preview).
(Everything from this post comes from the original article above)
The JavaScript Client Library for Azure Storage enables many web development scenarios using storage services like Blob, Table, Queue, and File, and is compatible with modern browsers.
The new JavaScript Client Library for Browsers supports all the storage features available in the latest REST API version 2016-05-31, since it is built with Browserify using the Azure Storage Client Library for Node.js.
We highly recommend use of SAS tokens to authenticate with Azure Storage since the JavaScript Client Library will expose the authentication token to the user in the browser. A SAS token with limited scope and time is highly recommended. In an ideal web application it is expected that the backend application will authenticate users when they log on, and will then provide a SAS token to the client for authorizing access to the Storage account. This removes the need to authenticate using an account key. Check out the Azure Function sample in our Github repository that generates a SAS token upon an HTTP POST request.
Code sample:
Insert the following script tags in your HTML code. Make sure the JavaScript files are located in the same folder.
<script src="azure-storage.common.js"></script>
<script src="azure-storage.blob.js"></script>
Let’s now add a few items to the page to initiate the transfer. Add the following tags inside the BODY tag. Notice that the button calls the uploadBlobFromText method when clicked. We will define this method in the next step.
<input type="text" id="text" name="text" value="Hello World!" />
<button id="upload-button" onclick="uploadBlobFromText()">Upload</button>
So far, we have included the client library and added the HTML code to show the user a text input and a button to initiate the transfer. When the user clicks on the upload button, uploadBlobFromText will be called. Let’s define that now:
<script>
function uploadBlobFromText() {
    // your account and SAS information
    var sasKey = "....";
    var blobUri = "http://<accountname>.blob.core.windows.net";
    var blobService = AzureStorage.createBlobServiceWithSas(blobUri, sasKey)
        .withFilter(new AzureStorage.ExponentialRetryPolicyFilter());

    var text = document.getElementById('text');
    var btn = document.getElementById("upload-button");

    blobService.createBlockBlobFromText('mycontainer', 'myblob', text.value, function (error, result, response) {
        if (error) {
            alert('Upload failed, open browser console for more detailed info.');
            console.log(error);
        } else {
            alert('Upload succeeded!');
        }
    });
}
</script>
Do take a look at these blog posts for uploading files directly from browser to blob storage:
http://coderead.wordpress.com/2012/11/21/uploading-files-directly-to-blob-storage-from-the-browser/
http://gauravmantri.com/2013/02/16/uploading-large-files-in-windows-azure-blob-storage-using-shared-access-signature-html-and-javascript
The 2nd post (written by me) makes use of the HTML5 File API and thus would not work in all browsers.
The basic idea is to create a Shared Access Signature (SAS) for a blob container. The SAS should have Write permission. Since Windows Azure Blob Storage does not support CORS yet (which is supported by both Amazon S3 and Google), you would need to host the HTML page in the blob storage where you want your users to upload the file. Then you can use jQuery's Ajax functionality.
Now that Windows Azure storage services support CORS, you can do this. You can see the announcement here: Windows Azure Storage Release - Introducing CORS, JSON, Minute Metrics, and More.
I have a simple example that illustrates this scenario here: http://www.contentmaster.com/azure/windows-azure-storage-cors/
The example shows how to upload and download directly from a private blob using jQuery.ajax. This example still requires a server component to generate the shared access signature: this avoids the need to expose the storage account key in the client code.
You can use the HTML5 File API, AJAX and MVC 3 to build a robust file upload control that uploads huge files securely and reliably to Windows Azure Blob Storage, with provision for monitoring progress and cancelling the operation. The solution works as below:
Client-side JavaScript that accepts and processes a file uploaded by the user.
Server-side code that processes file chunks sent by the JavaScript (see the sketch after this answer).
Client-side UI that invokes the JavaScript.
Get the sample code here: Reliable Uploads to Windows Azure Blob Storage via an HTML5 Control
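As a rough illustration of the server-side piece mentioned above, a hedged sketch of an MVC action that stages one chunk per request and commits the block list after the last chunk; the route, parameters and container helper are assumptions, and the linked sample is the authoritative version.
[HttpPost]
public ActionResult UploadChunk(string fileName, int chunkIndex, int totalChunks)
{
    CloudBlobContainer container = GetUploadsContainer(); // assumed helper
    CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

    // Block IDs must be Base64-encoded and the same length for every block
    string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(chunkIndex.ToString("d6")));

    // Stage this chunk as an uncommitted block
    blob.PutBlock(blockId, Request.InputStream, null);

    // After the last chunk, commit the blocks in order to form the final blob
    if (chunkIndex == totalChunks - 1)
    {
        var blockIds = Enumerable.Range(0, totalChunks)
            .Select(i => Convert.ToBase64String(Encoding.UTF8.GetBytes(i.ToString("d6"))));
        blob.PutBlockList(blockIds);
    }

    return new HttpStatusCodeResult(200);
}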
I have written a blog post with an example on how to do this; the code is on GitHub.
It is based on Gaurav Mantri's post and works by hosting the JavaScript on the Blob Storage itself.
Configure a proper CORS rule on your storage account (see the sketch after the upload code below).
Generate a Shared Access Signature for your target container.
Install the blob storage SDK: npm install @azure/storage-blob.
Assuming your file is a Blob/Buffer/ArrayBuffer, you can do something like this in your code:
import { ContainerClient } from "@azure/storage-blob";

const account = "your storage account name";
const container = "your container name";
const sas = "your shared access signature";

const containerClient = new ContainerClient(
    `https://${account}.blob.core.windows.net/${container}${sas}`
);

async function upload(fileName, file) {
    const blockBlobClient = containerClient.getBlockBlobClient(fileName);
    const result = await blockBlobClient.uploadData(file);
    console.log("uploaded", result);
}
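For the first step listed above (the CORS rule), a hedged sketch using the classic .NET storage SDK; the connection string and allowed origin are placeholders, and the same rule can also be configured in the portal.
var client = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient();
ServiceProperties properties = client.GetServiceProperties();

// Allow browser uploads (PUT) from your site's origin
properties.Cors.CorsRules.Add(new CorsRule
{
    AllowedOrigins = new List<string> { "https://www.example.com" },
    AllowedMethods = CorsHttpMethods.Put | CorsHttpMethods.Get | CorsHttpMethods.Head,
    AllowedHeaders = new List<string> { "*" },
    ExposedHeaders = new List<string> { "*" },
    MaxAgeInSeconds = 3600
});

client.SetServiceProperties(properties);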

Azure Media Services Shared Access Policy limitations

I'm trying to create time-limited URLs for smooth streaming of media stored in Azure Media Services.
I am working against the code supplied here.
Windows Azure Smooth Streaming example
I upload a video file to a new Asset. I encode that video file using Azure Media Services encoding with the preset "H264 Adaptive Bitrate MP4 Set 720p". With the resulting encoded asset, I then attempt to create a streaming URL by creating an Access Policy and then a Locator, which I use to generate the URL used for streaming.
Here is the code:
string urlForClientStreaming = "";

IAssetFile manifestFile = (from f in Asset.AssetFiles
                           where f.Name.EndsWith(".ism")
                           select f).FirstOrDefault();

if (manifestFile != null)
{
    // Create a 1 hour read-only access policy.
    IAccessPolicy policy = _mediaContext.AccessPolicies.Create("Streaming policy", TimeSpan.FromHours(1), AccessPermissions.Read);

    // Create a locator to the streaming content on an origin.
    ILocator originLocator = _mediaContext.Locators.CreateLocator(LocatorType.OnDemandOrigin, Asset, policy, DateTime.UtcNow.AddMinutes(-5));

    urlForClientStreaming = originLocator.Path + manifestFile.Name + "/manifest";

    if (contentType == MediaContentType.HLS)
        urlForClientStreaming = String.Format("{0}{1}", urlForClientStreaming, "(format=m3u8-aapl)");
}

return urlForClientStreaming;
This works great. Until the 6th time you execute that code against the same Asset. Then you receive this error:
"Server does not support setting more than 5 shared access policy identifiers on a single container."
So, that's fine. I don't need to create a new AccessPolicy every time; I can reuse the one I've created previously and build a Locator using that same policy. However, even then, I get the error about 5 shared access policies on a single container.
Here is the new code that creates the locator with the same AccessPolicy used previously:
string urlForClientStreaming = "";

IAssetFile manifestFile = (from f in Asset.AssetFiles
                           where f.Name.EndsWith(".ism")
                           select f).FirstOrDefault();

if (manifestFile != null)
{
    // Reuse the 1 hour read-only access policy if it already exists.
    IAccessPolicy accessPolicy =
        (from p in _mediaContext.AccessPolicies where p.Name == "myaccesspolicy" select p).FirstOrDefault();

    if (accessPolicy == null)
    {
        accessPolicy = _mediaContext.AccessPolicies.Create("myaccesspolicy", TimeSpan.FromHours(1), AccessPermissions.Read);
    }

    // Create a locator to the streaming content on an origin.
    ILocator originLocator = _mediaContext.Locators.CreateLocator(LocatorType.OnDemandOrigin, Asset, accessPolicy, DateTime.UtcNow.AddMinutes(-5));

    urlForClientStreaming = originLocator.Path + manifestFile.Name + "/manifest";

    if (contentType == MediaContentType.HLS)
        urlForClientStreaming = String.Format("{0}{1}", urlForClientStreaming, "(format=m3u8-aapl)");
}

return urlForClientStreaming;
I don't understand why it's saying I've created 5 shared access policies. In the case of the second block of code, I only ever create one access policy. I can verify there is only ever one AccessPolicy by viewing the contents of _mediaContext.AccessPolicies; there is always just one access policy in that list.
At some point this will likely have many users requesting access to the same Asset. The URLs provided to these clients need to be time-limited, as per our client's requirements.
Is this not the appropriate means to create a URL for smooth streaming of an asset?
Late reply I know...
Given your requirement to create a single URL that can be used by anyone indefinitely, I would suggest that you:
Create a long-lived locator when you create the asset, e.g. for a year - you can use the same access policy each time, like you have in your second example
When you're building the URL for streaming, get that locator from the asset
Check the length of time left on the locator - if it's less than a certain amount of time (e.g. a month), then extend the locator using ILocator.Update, e.g. for another year (see the sketch below). Updating the expiry date of the locator does not affect the original access policy that you used to create the locator.
Profit.
HTH
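A hedged sketch of that check-and-extend step, using the Media Services .NET SDK; the one-month threshold and one-year extension simply mirror the figures suggested above.
// Find the asset's existing on-demand origin locator and extend it if it is close to expiring
ILocator originLocator = Asset.Locators
    .Where(l => l.Type == LocatorType.OnDemandOrigin)
    .FirstOrDefault();

if (originLocator != null && originLocator.ExpirationDateTime < DateTime.UtcNow.AddMonths(1))
{
    // Updating the expiry does not change the access policy the locator was created from
    originLocator.Update(DateTime.UtcNow.AddYears(1));
}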
Now, with the Azure Media Services content protection feature, you can encrypt your media file with either AES or PlayReady and generate a long-lived locator. At the same time, you set a token-authorization policy for the content key; the token duration can be set to a short period of time (enough for the player to retrieve the content key). This way you can control access to your content. For more information, you can refer to my blog: http://azure.microsoft.com/blog/2014/09/10/announcing-public-availability-of-azure-media-services-content-protection-services/
The locators were not designed to do per-user access control. Use a Digital Rights Management system for that. They have concepts of viewing windows, persistent and non-persistent licensing and much more. Specifically, I'm talking about using PlayReady encryption in WAMS and a PlayReady server to configure and provide the licenses (there is EzDRM in the Azure Portal, also BuyDRM and others).
Locators offer basic on-off switching of streaming services. You can create up to five, because they rely on the underlying SAS limitation of five stored access policies per container.
