Storing a picture in an Azure Mobile Services table

I want to save an image to my Azure Mobile Service.
I have been looking around and found that you can use blobs and Azure Storage for this. But instead of implementing that, I would love to convert the image to a string or stream that can be stored in a normal Azure Mobile Service table.
I am creating images in my app as follows:
Canvas found = null;
try
{
    found = FindParentOfType<Canvas>(ViewInteractionCanvas.canvas);
}
catch (Exception)
{
    //MessageBox.Show(e.Message.ToString(), "ERROR", MessageBoxButton.OK);
    found = ViewInteractionCanvas.canvas;
}
WriteableBitmap writeableBitmap = new WriteableBitmap(found, null);
var imageBrush = new ImageBrush
{
    ImageSource = writeableBitmap,
    Stretch = Stretch.None
};
writeableBitmap = null;
GC.Collect();
try
{
    FindChildCanvas(found, imageBrush);
}
catch (Exception e)
{
    MessageBox.Show(e.Message.ToString(), AppResources.ErrorSaving, MessageBoxButton.OK);
    return false;
}
var fileStream = new MemoryStream();
writeableBitmap = new WriteableBitmap(found, null);
writeableBitmap.SaveJpeg(fileStream, writeableBitmap.PixelWidth, writeableBitmap.PixelHeight, 100, 100);
fileStream.Seek(0, SeekOrigin.Begin);
string tempJPEG = "My.jpg";
using (IsolatedStorageFile myIsolatedStorage = IsolatedStorageFile.GetUserStoreForApplication())
{
    if (myIsolatedStorage.FileExists(tempJPEG))
    {
        myIsolatedStorage.DeleteFile(tempJPEG);
    }
    IsolatedStorageFileStream IsofileStream = myIsolatedStorage.CreateFile(tempJPEG);
    /*
    StreamResourceInfo sri = null;
    Uri uri = new Uri(tempJPEG, UriKind.Relative);
    sri = Application.GetResourceStream(uri);
    BitmapImage bitmap = new BitmapImage();
    bitmap.SetSource(sri.Stream);
    WriteableBitmap wb = new WriteableBitmap(bitmap);
    */
    // Encode WriteableBitmap object to a JPEG stream.
    //Extensions.SaveJpeg(wb, IsofileStream, wb.PixelWidth, wb.PixelHeight, 0, 85);
    writeableBitmap.SaveJpeg(IsofileStream, writeableBitmap.PixelWidth, writeableBitmap.PixelHeight, 100, 100);
    IsofileStream.Close();
}
dialogResult = MessageBox.Show(AppResources.ShieldCreator_SaveShield, AppResources.ShieldCreator_SaveShieldTitle, MessageBoxButton.OKCancel);
if (dialogResult == MessageBoxResult.OK)
{
    MediaLibrary library = new MediaLibrary();
    library.SavePictureToCameraRoll("picture", fileStream);
}
if (dialogResult == MessageBoxResult.Cancel)
{
}
fileStream.Close();
I was thinking that I could send the file stream or something like that, but I have not succeeded in doing so. Maybe this is completely impossible, but I just wanted to investigate the possibility instead of starting to learn a new concept.
Hope somebody can help.

By default, Mobile Services data is backed by SQL Database. As long as you can find a way to map the image to a data type your table supports, you'll be able to do this. Just keep in mind: SQL Database instances are limited to 150 GB, which will be eaten up much faster if you store the content in the database itself rather than, say, in blob storage with only the blob's URL stored in your SQL table (blob storage also costs significantly less than SQL Database).
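To illustrate the recommended pattern, here is a minimal, hedged sketch: upload the JPEG to blob storage and store only the blob's URL in the Mobile Services table. It assumes the classic Windows Azure Storage client (2.x era) together with the Mobile Services managed client; ImageItem, the container name, and the connection string are illustrative placeholders, not code from the question.
// Sketch only: blob upload plus URL-in-table pattern (names are illustrative).
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class ImageItem
{
    public int Id { get; set; }           // classic Mobile Services integer id
    public string Name { get; set; }
    public string ImageUrl { get; set; }  // URL of the blob, stored in the table
}

public static async Task SaveImageAsync(
    MobileServiceClient mobileClient, string storageConnectionString, MemoryStream jpegStream)
{
    // Upload the picture to blob storage.
    CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
    CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("pictures");
    container.CreateIfNotExists();

    CloudBlockBlob blob = container.GetBlockBlobReference("My.jpg");
    jpegStream.Seek(0, SeekOrigin.Begin);
    blob.UploadFromStream(jpegStream);

    // Store only the blob's URL in the Mobile Services table.
    var item = new ImageItem { Name = "My.jpg", ImageUrl = blob.Uri.ToString() };
    await mobileClient.GetTable<ImageItem>().InsertAsync(item);
}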

What you're talking about doing (storing the image data in your SQL database) is possible and not altogether difficult, but it is definitely not recommended. There are several issues, including the size of the data and the inefficiency of storing it this way. At the end of the day, though, if that's how you want to implement it, I have posts explaining how you can do so from an Android and an iOS app:
Android: http://chrisrisner.com/Storing-Images-from-Android-in-Windows-Azure-Mobile-Services
iOS: http://chrisrisner.com/Storing-Images-from-iOS-in-Windows-Azure-Mobile-Services
Due to the data types supported by Mobile Services, you'd need to store the image data as strings (varchars in the database). Again, far from the most efficient, but it will work.
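If you do go the string route, something along these lines could work on the client. This is only a sketch: it assumes the Mobile Services managed client, and PictureItem and its ImageBase64 column are hypothetical names; as noted above, a Base64-encoded JPEG bloats each row considerably.
// Sketch only: store the JPEG as a Base64 string in a Mobile Services table.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class PictureItem
{
    public int Id { get; set; }              // classic Mobile Services integer id
    public string Name { get; set; }
    public string ImageBase64 { get; set; }  // JPEG bytes encoded as a Base64 string
}

public static async Task SavePictureAsync(MobileServiceClient client, MemoryStream jpegStream)
{
    // Encode the JPEG bytes as a string so they fit a varchar column.
    string encoded = Convert.ToBase64String(jpegStream.ToArray());

    var item = new PictureItem { Name = "My.jpg", ImageBase64 = encoded };
    await client.GetTable<PictureItem>().InsertAsync(item);

    // To read it back: byte[] bytes = Convert.FromBase64String(item.ImageBase64);
}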

Related

.NET Core: Reading Azure Storage Blob into Memory Stream throws NotSupportedException in HttpBaseStream

I want to download a storage blob from Azure and stream it to a client via a .NET web app. The blob was uploaded correctly and is visible in my Azure storage account.
Surprisingly, the following throws an exception within HttpBaseStream:
[...]
var blobClient = _containerClient.GetBlobClient(Path.Combine(fileName));
var stream = await blobClient.OpenReadAsync();
return stream;
-> When I step further and return a File (return File(stream, MediaTypeNames.Application.Octet);), the download works as intended.
I tried to push the stream into a MemoryStream, which also fails with the same exception:
[...]
var blobClient = _containerClient.GetBlobClient(Path.Combine(fileName));
var stream = new MemoryStream();
await blobClient.DownloadToAsync(stream);
return stream;
-> When I step further, returning the file results in a timeout.
How can I fix that? Why do I get this exception? I followed the official quickstart guide from Microsoft.
the following throws an exception within HttpBaseStream
It looks like the HTTP result type is attempting to set the Content-Length header and is reading Length to do so. That would be the natural thing to do. However, it would also be natural to handle the NotSupportedException and just not set Content-Length at all.
If the NotSupportedException only shows up when running in the debugger, then just ignore it.
If the exception is actually thrown to your code (i.e., causing the request to fail), then you'll need to follow the rest of this answer.
First, create a minimal reproducible example and report a bug to the .NET team.
To work around this issue in the meantime, I recommend writing a stream wrapper that returns an already-determined length, which you can get from the Azure blob attributes. E.g.:
public sealed class KnownLengthStreamWrapper : Stream
{
    private readonly Stream _stream;
    private readonly long _length;

    public KnownLengthStreamWrapper(Stream stream, long length)
    {
        _stream = stream;
        _length = length;
    }

    public override long Length => _length;

    // ... override all other Stream members and forward them to _stream.
}
That should be sufficient to get your app working.
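For completeness, a rough sketch of how that wrapper might be wired up with the Azure.Storage.Blobs client used in the question; the helper method name is made up, and the blob's length comes from its properties rather than from the network stream:
// Sketch only (assumes Azure.Storage.Blobs and the KnownLengthStreamWrapper above).
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static async Task<Stream> OpenBlobStreamAsync(BlobContainerClient containerClient, string fileName)
{
    BlobClient blobClient = containerClient.GetBlobClient(fileName);

    // The blob size is known up front, so the HTTP layer can set Content-Length
    // without calling Length on the non-seekable network stream.
    BlobProperties properties = await blobClient.GetPropertiesAsync();
    Stream blobStream = await blobClient.OpenReadAsync();

    return new KnownLengthStreamWrapper(blobStream, properties.ContentLength);
}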
I tried to push the stream into a MemoryStream
This didn't work because you'd need to "rewind" the MemoryStream at some point, e.g.:
var stream = new MemoryStream();
await blobClient.DownloadToAsync(stream);
stream.Position = 0;
return stream;
Check this sample covering the blob download options, which I have already posted on GitHub and which works as expected. Reference
public void DownloadBlob(string path)
{
    storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    CloudBlobClient client = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = client.GetContainerReference("images");
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(Path.GetFileName(path));
    using (MemoryStream ms = new MemoryStream())
    {
        blockBlob.DownloadToStream(ms);
        HttpContext.Current.Response.ContentType = blockBlob.Properties.ContentType.ToString();
        HttpContext.Current.Response.AddHeader("Content-Disposition", "Attachment; filename=" + Path.GetFileName(path).ToString());
        HttpContext.Current.Response.AddHeader("Content-Length", blockBlob.Properties.Length.ToString());
        HttpContext.Current.Response.BinaryWrite(ms.ToArray());
        HttpContext.Current.Response.Flush();
        HttpContext.Current.Response.Close();
    }
}

Updating many json files in Azure blob storage?

I have about ~1,000,000 JSON files that I would like to update every 30 minutes. Each update simply appends a new array to the end of the existing content.
A single update uses code similar to:
CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
JObject jObject = null;

// If the blob exists, then we may need to update it.
if (blockBlob.Exists())
{
    MemoryStream memoryStream = new MemoryStream();
    blockBlob.DownloadToStream(memoryStream);
    jObject = JsonConvert.DeserializeObject(Encoding.UTF8.GetString(memoryStream.ToArray())) as JObject;
} // End of the blob exists

if (null == jObject)
{
    jObject = new JObject();
    jObject.Add(new JProperty("identifier", identifier));
} // End of the blob did not exist

JArray jsonArray = new JArray();
jObject.Add(new JProperty(string.Format("entries{0}", timestamp.ToString()), jsonArray));

foreach (var entry in newEntries)
{
    jsonArray.Add(new JObject(
        new JProperty("someId", entry.id),
        new JProperty("someValue", value)
    ));
} // End of loop

string jsonString = JsonConvert.SerializeObject(jObject);

// Upload
blockBlob.Properties.ContentType = "text/json";
blockBlob.UploadFromStream(new MemoryStream(Encoding.UTF8.GetBytes(jsonString)));
Basically:
Check if the blob exists,
If it does, download the data and create a json object from the existing details.
If it did not, then create a new object with the details.
Push the update to the blob.
The problem with this is performance. I've done quite a few things to increase performance (the updates run in five parallel threads, and I have set ServicePointManager.UseNagleAlgorithm to false).
It still runs slowly, though: roughly 100,000 updates can take up to an hour.
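For reference, a rough sketch of the kind of throttled parallel loop described above. UpdateBlob stands in for the per-blob routine shown earlier and blobNames for the list of blobs to touch; both are placeholders, not code from the question.
// Sketch only: five-way parallel updates with the connection tuning mentioned above.
using System.Net;
using System.Threading.Tasks;

// Connection tuning, done once at startup.
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.DefaultConnectionLimit = 100;

// Run the per-blob update routine on five blobs at a time.
Parallel.ForEach(
    blobNames,
    new ParallelOptions { MaxDegreeOfParallelism = 5 },
    blobName => UpdateBlob(container, blobName));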
So I guess basically, my questions would be:
Should I be using Azure Blob storage for this? (I'm open to alternative suggestions).
If so, any suggestions on improving performance?
Note: The file basically contains a history of events and I cannot re-generate the entire file based on existing data. This is why the contents are downloaded before being updated.

NPOI writing XLS file converting to Azure Blob

I'm trying to convert a current application that uses NPOI to create an XLS document on the server into an Azure-hosted application. I have little experience with NPOI and Azure, so that's two strikes right there. I have the app uploading the XLS to a blob container; however, the blob is always blank (9 bytes). From what I understand, NPOI uses a FileStream to write to the file, so I just changed that to write to the blob container.
Here is what I think are the relevant portions:
internal void GenerateExcel(DataSet ds, int QuoteID, string ReportFileName)
{
    string ExcelFileName = string.Format("{0}_{1}.xls", ReportFileName, QuoteID);
    try
    {
        //these 2 strings will get deleted but left here for now to run side by side at the moment
        string ReportDirectoryPath = HttpContext.Current.Server.MapPath(".") + "\\Reports";
        if (!Directory.Exists(ReportDirectoryPath))
        {
            Directory.CreateDirectory(ReportDirectoryPath);
        }
        string ExcelReportFullPath = ReportDirectoryPath + "\\" + ExcelFileName;
        if (File.Exists(ExcelReportFullPath))
        {
            File.Delete(ExcelReportFullPath);
        }

        // Create a new workbook.
        var workbook = new HSSFWorkbook();

        //Rest of the NPOI XLS rows cells etc. etc. all works fine when writing to disk////////////////

        // Retrieve storage account from connection string.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        // Retrieve a reference to a container.
        CloudBlobContainer container = blobClient.GetContainerReference("pricingappreports");
        // Create the container if it doesn't already exist.
        if (container.CreateIfNotExists())
        {
            container.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
        }
        // Retrieve reference to a blob with the same name.
        CloudBlockBlob blockBlob = container.GetBlockBlobReference(ExcelFileName);

        // Write the output to a file on the server
        String file = ExcelReportFullPath;
        using (FileStream fs = new FileStream(file, FileMode.Create))
        {
            workbook.Write(fs);
            fs.Close();
        }

        // Write the output to a file on Azure Storage
        String Blobfile = ExcelFileName;
        using (FileStream fs = new FileStream(Blobfile, FileMode.Create))
        {
            workbook.Write(fs);
            blockBlob.UploadFromStream(fs);
            fs.Close();
        }
    }
I'm uploading to the blob and the file exists, so why doesn't the data get written to the XLS?
Any help would be appreciated.
Update: I think I found the problem. It doesn't look like you can write directly to a file in Blob Storage. I found this blog post which pretty much answers my question: it doesn't use NPOI, but the concept is the same. http://debugmode.net/2011/08/28/creating-and-updating-excel-file-in-windows-azure-web-role-using-open-xml-sdk/
Thanks
Can you install Fiddler and check the request and response? You may also need to seek back to 0 between the two writes. So the correct code here could be to add the Seek call shown below before uploading the stream to the blob:
workbook.Write(fs);
fs.Seek(0, SeekOrigin.Begin);
blockBlob.UploadFromStream(fs);
fs.Close();
I also noticed that you are using String Blobfile = ExcelFileName instead of String Blobfile = ExcelReportFullPath.
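Alternatively, the local file could be skipped entirely by serializing the workbook into a MemoryStream and uploading that. This is only a sketch against the same classic storage SDK; note that some NPOI versions close the stream inside Write, which is why the buffer is copied out with ToArray first.
// Sketch: upload the NPOI workbook without touching the local file system.
byte[] workbookBytes;
using (var ms = new MemoryStream())
{
    workbook.Write(ms);            // NPOI serializes the .xls into the stream
    workbookBytes = ms.ToArray();  // ToArray still works even if Write closed the stream
}

using (var uploadStream = new MemoryStream(workbookBytes))
{
    blockBlob.UploadFromStream(uploadStream);
}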

Can't access Azure Storage folder using Lucene.Net

We decided to implement search functionality in our API, which is developed in ServiceStack, and we decided to use Lucene.Net since we heard it was a great indexer for searching.
We created a worker role whose job is to create the indexes in an Azure Storage folder, following Leon Cullen's tutorial. We use the AzureDirectory library specified in that post, so we could use the latest Azure SDK.
Then in our API project we added the references for Lucene.Net and AzureDirectory too, and our endpoint ended up looking like this:
public object Post(SearchIndex request)
{
    List<Product> products = new List<Product>();
    var pageSize = -1;
    var totalpages = -1;
    int.TryParse(ConfigurationManager.AppSettings["PageSize"], out pageSize);
    if (request.Page.Equals(0))
    {
        request.Page = 1;
    }

    // Get Azure settings
    AzureDirectory azureDirectory;
    try
    {
        // This is the line where we get the Access denied exception thrown at us
        azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");
        IndexSearcher searcher;
        using (new AutoStopWatch("Creating searcher"))
        {
            searcher = new IndexSearcher(azureDirectory);
        }
        using (new AutoStopWatch(string.Format("Search for {0}", request.SearchString)))
        {
            string[] searchfields = new string[] { "Id", "Name", "Description" };
            var hits = searcher.Search(QueryMaker(request.SearchString, searchfields), request.Page * pageSize);
            int count = hits.ScoreDocs.Count();
            float temp_totalpages = 0;
            temp_totalpages = (float)hits.ScoreDocs.Count() / (float)pageSize;
            if (temp_totalpages > (int)temp_totalpages)
            {
                totalpages = (int)temp_totalpages + 1;
            }
            else
            {
                totalpages = (int)temp_totalpages;
            }
            foreach (ScoreDoc match in hits.ScoreDocs)
            {
                Document doc = searcher.Doc(match.Doc);
                int producId = int.Parse(doc.Get("Id"));
                Product product = Db.Select<Product>("Id={0}", producId).FirstOrDefault();
                products.Add(product);
            }
        }
        return new SearchIndexResult { result = products.Skip((int)((request.Page - 1) * 10)).Take(pageSize).ToList(), PageSize = pageSize, TotalPages = totalpages };
    }
    catch (Exception e)
    {
        return new HttpResult(HttpStatusCode.NoContent, "azureDirectory. Parameter: " + request.SearchString + ". e: " + e.Message);
    }
}
If we run this locally it works as expected, returning the results we were expecting. But when we published our API to Azure and tried to access the search endpoint, we received a 403 error with the message "Access to the path 'D:/AzureDirectory' is denied".
We're confused as to why it is trying to access such a folder at all; the folder name looks wrong and appears to be a local path. We really don't know why it works fine locally but stops working once it's deployed to Azure.
The worker role runs without any problems, but the API side cannot access the folder in Azure Storage. Are we missing some important configuration step? The tutorial we followed wasn't very clear for beginners using Lucene.Net or Azure Storage, so we fear we might have missed an important step. We've checked our connection strings and everything seems fine, though.
For reference:
https://github.com/azure-contrib/AzureDirectory/blob/master/AzureDirectory/AzureDirectory.cs
When you do this:
azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");
it executes the following:
var cachePath = Path.Combine(Path.GetPathRoot(Environment.SystemDirectory), "AzureDirectory");
var azureDir = new DirectoryInfo(cachePath);
if (!azureDir.Exists)
    azureDir.Create();

var catalogPath = Path.Combine(cachePath, _containerName);
var catalogDir = new DirectoryInfo(catalogPath);
if (!catalogDir.Exists)
    catalogDir.Create();

_cacheDirectory = FSDirectory.Open(catalogPath);
So a simple solution for you might be to keep that cache directory under the site root:
DirectoryInfo info = new DirectoryInfo(HostingEnvironment.MapPath("~/"));
azureDirectory = new AzureDirectory(storageAccount, containerName, new SimpleFSDirectory(info), true);
I got it to work.
I just got the latest version of AzureDirectory from GitHub.
Got the latest NuGet packages for Azure Storage etc.
Recreated the index.
In addition to @brykneval's answer: I tried his solution, but the last parameter (bool compressBlob = false), which he set to true, made my local debugging fail with a 404 exception from the AzureDirectory library, and when I published to an Azure web app it threw an exception with the message System.IO.InvalidDataException: Block length does not match with its complement.
I removed the last parameter from the constructor and everything works like a charm. Hope this helps someone.
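In other words, assuming the library exposes an overload without the compression flag (or that the flag simply defaults to false when omitted), the construction would look like this:
// Same setup as above, just without the compressBlob argument.
DirectoryInfo info = new DirectoryInfo(HostingEnvironment.MapPath("~/"));
azureDirectory = new AzureDirectory(storageAccount, containerName, new SimpleFSDirectory(info));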

Getting an error when uploading a file to Azure Storage

I'm converting a website from a standard ASP.NET website over to use Azure. The website had previously taken an Excel file uploaded by an administrative user and saved it on the file system. As part of the migration, I'm saving this file to Azure Storage. It works fine when running against my local storage through the Azure SDK. (I'm using version 1.3 since I didn't want to upgrade during the development process.)
When I point the code to run against Azure Storage itself, though, the process usually fails. The error I get is:
System.IO.IOException occurred
Message=Unable to read data from the transport connection: The connection was closed.
Source=Microsoft.WindowsAzure.StorageClient
StackTrace:
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
at Framework.Common.AzureBlobInteraction.UploadToBlob(Stream stream, String BlobContainerName, String fileName, String contentType) in C:\Development\RateSolution2010\Framework.Common\AzureBlobInteraction.cs:line 95
InnerException:
The code is as follows:
public void UploadToBlob(Stream stream, string BlobContainerName, string fileName,
    string contentType)
{
    // Setup the connection to Windows Azure Storage
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GetConnStr());

    DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
    dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
    dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
    DiagnosticMonitor.Start(storageAccount, dmc);

    CloudBlobClient BlobClient = null;
    CloudBlobContainer BlobContainer = null;
    BlobClient = storageAccount.CreateCloudBlobClient();

    // For large file copies you need to set up a custom timeout period
    // and using parallel settings appears to spread the copy across multiple threads
    // if you have big bandwidth you can increase the thread number below
    // because Azure accepts blobs broken into blocks in any order of arrival.
    BlobClient.Timeout = new System.TimeSpan(1, 0, 0);
    Role serviceRole = RoleEnvironment.Roles.Where(s => s.Value.Name == "OnlineRates.Web").First().Value;
    BlobClient.ParallelOperationThreadCount = serviceRole.Instances.Count;

    // Get and create the container
    BlobContainer = BlobClient.GetContainerReference(BlobContainerName);
    BlobContainer.CreateIfNotExist();

    // Delete prior version if one exists
    BlobRequestOptions options = new BlobRequestOptions();
    options.DeleteSnapshotsOption = DeleteSnapshotsOption.None;
    CloudBlob blobToDelete = BlobContainer.GetBlobReference(fileName);
    Trace.WriteLine("Blob " + fileName + " deleted to be replaced by newer version.");
    blobToDelete.DeleteIfExists(options);

    // Set stream to starting position
    stream.Position = 0;
    long totalBytes = 0;

    // Open the stream and read it back.
    using (stream)
    {
        // Create the Blob and upload the file
        CloudBlockBlob blob = BlobContainer.GetBlockBlobReference(fileName);
        try
        {
            BlobClient.ResponseReceived += new EventHandler<ResponseReceivedEventArgs>((obj, responseReceivedEventArgs) =>
            {
                if (responseReceivedEventArgs.RequestUri.ToString().Contains("comp=block&blockid"))
                {
                    totalBytes += Int64.Parse(responseReceivedEventArgs.RequestHeaders["Content-Length"]);
                }
            });
            blob.UploadFromStream(stream);
            // Set the metadata into the blob
            blob.Metadata["FileName"] = fileName;
            blob.SetMetadata();
            // Set the properties
            blob.Properties.ContentType = contentType;
            blob.SetProperties();
        }
        catch (Exception exc)
        {
            Logging.ExceptionLogger.LogEx(exc);
        }
    }
}
I've tried a number of different alterations to the code: deleting a blob before replacing it (although the problem exists on new blobs as well), setting container permissions, not setting permissions, etc.
Your code looks like it should work, but it has lots of extra functionality that is not strictly required. I would cut it down to an absolute minimum and go from there. It's really only a gut feeling, but I think it might be the using statement giving you grief. This entire function could be written (presuming the container already exists) as:
public void UploadToBlob(Stream stream, string BlobContainerName, string fileName,
    string contentType)
{
    // Setup the connection to Windows Azure Storage
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GetConnStr());
    CloudBlobClient BlobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer BlobContainer = BlobClient.GetContainerReference(BlobContainerName);
    CloudBlockBlob blob = BlobContainer.GetBlockBlobReference(fileName);
    stream.Position = 0;
    blob.UploadFromStream(stream);
}
Notes on the stuff that I've removed:
You should set up diagnostics just once when your app starts, not every time a method is called; this is usually done in RoleEntryPoint.OnStart() (see the sketch after these notes).
I'm not sure why you're trying to set ParallelOperationThreadCount higher if you have more instances. Those two things seem unrelated.
It's not good form to check for the existence of a container/table every time you save something to it. It's more usual to do that check once when your app starts or to have a process external to the website to make sure all the required containers/tables/queues exist. Of course if you're trying to dynamically create containers this is not true.
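As a rough illustration of the first note, here is a sketch that moves the DiagnosticMonitor calls from the question's own code into a web role's OnStart. The setting name and role class name are illustrative, not taken from the question.
// Sketch: configure diagnostics once at role startup instead of per upload call.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Setting name is illustrative; use whatever your service configuration defines.
        CloudStorageAccount storageAccount =
            CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));

        DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
        dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        DiagnosticMonitor.Start(storageAccount, dmc);

        return base.OnStart();
    }
}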
The problem turned out to be firewall settings on my laptop. It's my personal laptop, originally set up at home, so the firewall rules weren't set up for a corporate environment, which resulted in slow performance on uploads and downloads.
