Stream MP3 files with Microsoft Azure

Is it possible to stream MP3 or WAV files with Microsoft Azure?
I want to have a file that can be played with any JS player, including the ability to start playback from any point the user wants (like the SoundCloud player, ...).
I tried to use Blob Storage for that, but that didn't work because it doesn't support streaming, so the file has to be downloaded completely before jumping to a certain point of the song.
Is there a way to make this possible with Blob Storage? Or do I have to use Azure Media Services (I tried that but only found support for video)?

The problem here is the default service version of the storage account: older default versions do not honor the HTTP Range requests that players need in order to seek. I had to set the version manually, via a small Java program, to a higher version (2014-02-14).
Here is the code. You need the Azure SDK for Java (https://github.com/Azure/azure-sdk-for-java) plus the slf4j, commons-lang3, and fasterxml (Jackson) libraries.
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.ServiceProperties;
import com.microsoft.azure.storage.blob.CloudBlobClient;

public class Storage {
    public static void main(String[] args) {
        try {
            CloudStorageAccount storageAccount = CloudStorageAccount.parse(
                    "AccountName=<storageName>;AccountKey=<storageAccessKey>;DefaultEndpointsProtocol=http");
            CloudBlobClient blobClient = storageAccount.createCloudBlobClient();

            // Get the current service properties
            ServiceProperties serviceProperties = blobClient.downloadServiceProperties();

            // Set the default service version to 2014-02-14
            // (any version from 2011-08-18 on supports range requests)
            serviceProperties.setDefaultServiceVersion("2014-02-14");

            // Save the updated service properties
            blobClient.uploadServiceProperties(serviceProperties);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
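For anyone doing this today: the same setting can be changed from .NET with the current Azure.Storage.Blobs package. A minimal sketch (untested; the connection string is a placeholder):

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class SetServiceVersion
{
    static void Main()
    {
        var service = new BlobServiceClient("<your-connection-string>");

        // Read the current service properties, raise the default version,
        // and write the properties back.
        BlobServiceProperties properties = service.GetProperties().Value;
        properties.DefaultServiceVersion = "2014-02-14";
        service.SetProperties(properties);
    }
}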

Related

C# Azure Storage download issue

I'm working on moving our files out of the database and into Azure Storage, while keeping the folder structure and file information in our SQL DB.
The control we are using is an ASPxFileManager from DevExpress, which does not allow async functionality because it isn't built in.
The code I'm using to pull the blob from Azure and return a stream is below:
using (var ms = new MemoryStream())
{
    var result = blobItem.DownloadStreaming().GetRawResponse();
    result.ContentStream.CopyTo(ms);
    return ms.ToArray();
}
It appears that for large files it downloads the blob to the server to get the stream AND then sends that stream to the client, so I think it is processing the content twice.
BlobContainerClient container = GetBlobContainer(containerInfo);
BlobClient blobItem = container.GetBlobClient(fileSystemFileDataId);
using (var ms = new MemoryStream())
{
    blobItem.DownloadTo(ms);
    return ms.ToArray();
}
I looked at using their CloudFileSystemProviderBase with GetDownloadUrl, but it only supports downloading single files. That works fine for single files, since we can return a URL with a SAS token, etc.
We still need to support downloading multiple files though.
Is there a way in Azure to NOT download to the server's file system first and just obtain ONE stream to send back for the DevExpress ASPxFileManager to process?
I liked the GetDownloadUrl call from their CloudFileSystemProviderBase, because it doesn't block the main UI thread and allows the user to continue to work in the app while large files are downloading.
Main question: Is there a way to return a stream from azure where it does not have to download to the server first?
(Note: I've already been talking to DevExpress about this issue)
UPDATE 2:
The code below obtains a stream, but does it download the file on the server and then send it to the client, or does it obtain the stream only once? I think the code I was using above does it twice.
Also, this code uses the WindowsAzure.Storage package, which is deprecated.
So what would be the NuGet package/C# code for the correct way nowadays?
if (!String.IsNullOrWhiteSpace(contentDisposition.FileName))
{
    string connectionString = ConfigurationManager.AppSettings["my-connection-string"];
    string containerName = ConfigurationManager.AppSettings["my-container"];

    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer blobContainer = blobClient.GetContainerReference(containerName);
    CloudBlockBlob blob = blobContainer.GetBlockBlobReference(contentDisposition.FileName);

    stream = blob.OpenWrite();
}
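As for the current-SDK part of the question: with the Azure.Storage.Blobs package, BlobClient.OpenRead() returns a stream that fetches ranges from the service as it is consumed, so the server never has to buffer the whole blob before sending it on. A rough sketch (connection string, container, and blob names are placeholders, not the asker's code):

using System.IO;
using Azure.Storage.Blobs;

static void StreamBlobTo(Stream destination)
{
    BlobClient blob = new BlobServiceClient("<your-connection-string>")
        .GetBlobContainerClient("<container>")
        .GetBlobClient("<blob-name>");

    // OpenRead pulls the blob in chunks on demand instead of
    // materializing a byte[] in server memory first.
    using (Stream blobStream = blob.OpenRead())
    {
        blobStream.CopyTo(destination);
    }
}

Handing this stream to the response (rather than a MemoryStream copy) avoids the double processing described above, at the cost of the request holding a connection to storage while the client downloads.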

Azure Blob Storage to host images / media - fetching with blob URL (without intermediary controller)

In this article, the author provides a way to upload via a WebAPI controller. This makes sense to me.
He then recommends using an API Controller and a dedicated service method to deliver the blob:
public async Task<HttpResponseMessage> GetBlobDownload(int blobId)
{
    // IMPORTANT: This must return HttpResponseMessage instead of IHttpActionResult
    try
    {
        var result = await _service.DownloadBlob(blobId);
        if (result == null)
        {
            return new HttpResponseMessage(HttpStatusCode.NotFound);
        }

        // Reset the stream position; otherwise, download will not work
        result.BlobStream.Position = 0;

        // Create response message with blob stream as its content
        var message = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(result.BlobStream)
        };

        // Set content headers
        message.Content.Headers.ContentLength = result.BlobLength;
        message.Content.Headers.ContentType = new MediaTypeHeaderValue(result.BlobContentType);
        message.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
        {
            FileName = HttpUtility.UrlDecode(result.BlobFileName),
            Size = result.BlobLength
        };

        return message;
    }
    catch (Exception ex)
    {
        return new HttpResponseMessage
        {
            StatusCode = HttpStatusCode.InternalServerError,
            Content = new StringContent(ex.Message)
        };
    }
}
My question is - why can't we just reference the blob URL directly after storing it in the database (instead of fetching via Blob ID)?
What's the benefit of fetching through a controller like this?
You can certainly deliver a blob directly, which then avoids using resources of your app tier (vm, app service, etc). Just note that, if blobs are private, you'd have to provide a special signed URI to the client app (e.g. adding a shared access signature) to allow this URI to be used publicly (for a temporary period of time). You'd generate the SAS within your app tier.
You'd still have all of your access control logic in your controller, to decide who has the rights to the object, for how long, etc. But you'd no longer need to stream the content through your app (consuming cpu, memory, & network resources). And you'd still be able to use https with direct storage access.
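To make the direct-delivery option concrete, here is a minimal sketch using the current Azure.Storage.Blobs package (the fifteen-minute window is an arbitrary choice, and the BlobClient must be constructed with account-key credentials, e.g. from a connection string, for GenerateSasUri to work):

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

static Uri GetTemporaryDownloadUri(BlobClient blob)
{
    // Read-only SAS that expires in 15 minutes; hand this URI to the
    // client so it can fetch the blob straight from storage.
    return blob.GenerateSasUri(BlobSasPermissions.Read,
                               DateTimeOffset.UtcNow.AddMinutes(15));
}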
Quite simply, you can enforce access control centrally when you use a controller. You have way more control over who/what/why is accessing the file. You can also log requests pretty easily too.
Longer term, you might want to change the locations of your files, add a partitioning strategy for scalability, or do something else in your app that requires a change that you don't see right now. When you use a controller you can isolate the client code from all of those inevitable changes.

Lucene.NET and storing data on Azure Blob Storage

I am asking this question specifically because I don't want to use the AzureDirectory project; I am just trying something on my own.
cloudStorageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=http;AccountName=xxxx;AccountKey=xxxxx");
blobClient = cloudStorageAccount.CreateCloudBlobClient();

// Used to test connectivity
List<CloudBlobContainer> containerList = new List<CloudBlobContainer>();
IEnumerable<CloudBlobContainer> containers = blobClient.ListContainers();
if (containers != null)
{
    foreach (var item in containers)
    {
        Console.WriteLine(item.Uri);
    }
}

// State the file location of the index
string indexLocation = containers.Last().Name;
Lucene.Net.Store.Directory dir = Lucene.Net.Store.FSDirectory.Open(indexLocation);

// Create an analyzer to process the text
Lucene.Net.Analysis.Analyzer analyzer =
    new Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);

// Create the index writer with the directory and analyzer defined
bool indexExists = Lucene.Net.Index.IndexReader.IndexExists(dir);
Lucene.Net.Index.IndexWriter indexWriter = new Lucene.Net.Index.IndexWriter(
    dir, analyzer, !indexExists, Lucene.Net.Index.IndexWriter.MaxFieldLength.UNLIMITED);

// Create a document, add in a single field
Lucene.Net.Documents.Document doc = new Lucene.Net.Documents.Document();
string path = "D:\\try.html";
TextReader reader = new FilterReader("D:\\try.html");
doc.Add(new Lucene.Net.Documents.Field("url", path,
    Lucene.Net.Documents.Field.Store.YES, Lucene.Net.Documents.Field.Index.NOT_ANALYZED));
doc.Add(new Lucene.Net.Documents.Field("content", reader.ReadToEnd(),
    Lucene.Net.Documents.Field.Store.YES, Lucene.Net.Documents.Field.Index.ANALYZED));

indexWriter.AddDocument(doc);
indexWriter.Optimize();
indexWriter.Commit();
indexWriter.Close();
Now the issue is that after indexing completes, I am not able to see any files created inside the container. Can anybody help me out?
You're using the FSDirectory there, which is going to write files to the local disk.
You're passing it the name of a container in blob storage. Blob storage is a service exposed over a REST API and is not addressable directly from the file system, so FSDirectory is not going to be able to write your index to blob storage.
Your options are (a more manual third route is sketched after this list):
Mount a VHD disk on the machine, and store the VHD in blob storage. There are some instructions on how to do this here: http://blogs.msdn.com/b/avkashchauhan/archive/2011/04/15/mount-a-page-blob-vhd-in-any-windows-azure-vm-outside-any-web-worker-or-vm-role.aspx
Use the Azure Directory, which you refer to in your question. I have rebuilt the AzureDirectory against the latest storage SDK: https://github.com/richorama/AzureDirectory
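The third, more manual route is to build the index on local disk and then copy the index files into the container yourself. A sketch only, reusing the blobClient from the question (UploadFile and CreateIfNotExist are from the old StorageClient SDK the question already uses):

// Build the index into a real local directory first...
string localIndexPath = "D:\\lucene-index";
Lucene.Net.Store.Directory dir = Lucene.Net.Store.FSDirectory.Open(localIndexPath);
// ...run the IndexWriter code from the question against 'dir'...

// ...then upload each index file to blob storage.
CloudBlobContainer container = blobClient.GetContainerReference("lucene-index");
container.CreateIfNotExist();
foreach (string file in System.IO.Directory.GetFiles(localIndexPath))
{
    CloudBlob blob = container.GetBlobReference(System.IO.Path.GetFileName(file));
    blob.UploadFile(file);
}

Note you would need to download the files again before searching; that synchronization is exactly what AzureDirectory automates for you.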
Another alternative for people looking around: I wrote up a directory implementation that uses the Azure shared cache (preview), which can be an alternative to AzureDirectory (albeit for bounded search sets):
https://github.com/ajorkowski/AzureDataCacheDirectory

windows azure blob leasing in sdk1.4

I have been using the following code, which I wrote after consulting this thread: Use blob-leasing feature in the Azure cloud app
public static void UploadFromStreamWithLease(CloudBlob blob, Stream src, string leaseID)
{
    string url = blob.Uri.ToString();
    if (blob.ServiceClient.Credentials.NeedsTransformUri)
    {
        url = blob.ServiceClient.Credentials.TransformUri(url);
    }

    HttpWebRequest req = BlobRequest.Put(new Uri(url), 90, blob.Properties, BlobType.BlockBlob, leaseID, 0);
    BlobRequest.AddMetadata(req, blob.Metadata);

    // Write the raw bytes to the request body (a StreamWriter would
    // stringify the byte[] instead of writing its contents).
    byte[] content = readFully(src);
    using (Stream body = req.GetRequestStream())
    {
        body.Write(content, 0, content.Length);
    }

    blob.ServiceClient.Credentials.SignRequest(req);
    req.GetResponse().Close();
}
The readFully() method above simply reads the content of the stream into a byte[] array.
I have been using this code to upload to any blob that has a valid lease ID. This was working fine until I moved to version 1.4 of the Azure SDK. With the new version of the SDK, I get an HTTP 400 error from the req.GetResponse() call.
Can someone please point out what has changed in azure sdk 1.4 that's screwing this up?
Thanks
Kapil
The 400 code means "bad request"; there should be some additional error message in the response. See http://paulsomers.blogspot.com/2010/10/azure-error-400-bad-request.html for some examples. You should try debugging or sniffing the network traffic to get the error message.
There were some bugs around downloading blobs in version 1.4, but they may not affect you. In any case, you should upgrade to the latest version.
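For anyone hitting this today: in the current Azure.Storage.Blobs SDK you no longer hand-sign requests at all; the lease ID travels as a request condition. A rough sketch (the acquire/release steps and names are illustrative, not the asker's code):

using System;
using System.IO;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

static void UploadWithLease(BlobClient blob, Stream src)
{
    // Acquire a 60-second lease on the existing blob.
    BlobLeaseClient lease = blob.GetBlobLeaseClient();
    string leaseId = lease.Acquire(TimeSpan.FromSeconds(60)).Value.LeaseId;

    // The upload is rejected unless the lease ID matches the active lease.
    var options = new BlobUploadOptions
    {
        Conditions = new BlobRequestConditions { LeaseId = leaseId }
    };
    blob.Upload(src, options);

    lease.Release();
}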

azure reading mounted VHD

I am developing an Azure web application.
I have created drive and drivePath static members in WebRole as follows:
public static CloudDrive drive = null;
public static string drivePath = "";
I have created a development storage drive in WebRole.OnStart as follows:
LocalResource azureDriveCache = RoleEnvironment.GetLocalResource("cache");
CloudDrive.InitializeCache(azureDriveCache.RootPath, azureDriveCache.MaximumSizeInMegabytes);

CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
    // For a console app, read from App.config:
    //configSetter(ConfigurationManager.AppSettings[configName]);
    // Or, if running in the Windows Azure environment:
    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
});

CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
CloudBlobClient blobClient = account.CreateCloudBlobClient();
blobClient.GetContainerReference("drives").CreateIfNotExist();

drive = account.CreateCloudDrive(
    blobClient
        .GetContainerReference("drives")
        .GetPageBlobReference("mysupercooldrive.vhd")
        .Uri.ToString()
);

try
{
    drive.Create(64);
}
catch (CloudDriveException ex)
{
    // Handle the exception here.
    // An exception is also thrown if all is well but the drive already exists.
}

string path = drive.Mount(azureDriveCache.MaximumSizeInMegabytes, DriveMountOptions.None);
IDictionary<String, Uri> listDrives = Microsoft.WindowsAzure.StorageClient.CloudDrive.GetMountedDrives();
drivePath = path;
The drive remains visible and accessible while execution is inside WebRole.OnStart; as soon as execution leaves WebRole.OnStart, the drive becomes unavailable to the application and the static members get reset (drivePath, for example, is set back to "").
Am I missing some configuration, or is there some other error?
Where's the other code where you're expecting to use drivePath? Is it in a web application?
If so, are you using SDK 1.3? In SDK 1.3, the default mode for a web application is to run under full IIS, which means running in a separate app domain from your RoleEntryPoint code (like OnStart), so you can't share static variables across the two. If this is the problem, you might consider moving this initialization code to Application_Start in Global.asax.cs instead (which runs in the web application's app domain).
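For example (a sketch only; MountCloudDrive is a hypothetical helper wrapping the mount code from the question):

using System;

// Global.asax.cs - this runs in the web application's app domain,
// so statics set here are visible to the pages.
public class Global : System.Web.HttpApplication
{
    public static string DrivePath = "";

    protected void Application_Start(object sender, EventArgs e)
    {
        // Hypothetical helper containing the InitializeCache/Create/Mount
        // code shown in the question.
        DrivePath = MountCloudDrive();
    }
}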
I found the solution:
On the development machine, requests originate from localhost, which was making the system crash.
Commenting out the "Sites" tag in ServiceDefinition.csdef resolves the issue (this reverts the role to the legacy hosted web core model, where the site and the RoleEntryPoint share an app domain).
