Azure Blob Storage to host images / media - fetching with blob URL (without intermediary controller)

In this article, the author provides a way to upload via a WebAPI controller. This makes sense to me.
He then recommends using an API Controller and a dedicated service method to deliver the blob:
public async Task<HttpResponseMessage> GetBlobDownload(int blobId)
{
    // IMPORTANT: This must return HttpResponseMessage instead of IHttpActionResult
    try
    {
        var result = await _service.DownloadBlob(blobId);
        if (result == null)
        {
            return new HttpResponseMessage(HttpStatusCode.NotFound);
        }

        // Reset the stream position; otherwise, download will not work
        result.BlobStream.Position = 0;

        // Create response message with blob stream as its content
        var message = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(result.BlobStream)
        };

        // Set content headers
        message.Content.Headers.ContentLength = result.BlobLength;
        message.Content.Headers.ContentType = new MediaTypeHeaderValue(result.BlobContentType);
        message.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
        {
            FileName = HttpUtility.UrlDecode(result.BlobFileName),
            Size = result.BlobLength
        };

        return message;
    }
    catch (Exception ex)
    {
        return new HttpResponseMessage
        {
            StatusCode = HttpStatusCode.InternalServerError,
            Content = new StringContent(ex.Message)
        };
    }
}
My question is - why can't we just reference the blob URL directly after storing it in the database (instead of fetching via Blob ID)?
What's the benefit of fetching through a controller like this?

You can certainly deliver a blob directly, which then avoids using resources of your app tier (vm, app service, etc). Just note that, if blobs are private, you'd have to provide a special signed URI to the client app (e.g. adding a shared access signature) to allow this URI to be used publicly (for a temporary period of time). You'd generate the SAS within your app tier.
You'd still have all of your access control logic in your controller, to decide who has the rights to the object, for how long, etc. But you'd no longer need to stream the content through your app (consuming cpu, memory, & network resources). And you'd still be able to use https with direct storage access.
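For example, here is a minimal sketch of generating a short-lived, read-only SAS in the app tier, assuming the Microsoft.WindowsAzure.Storage client library (connection string, container and blob names are placeholders):
// Generate a read-only SAS valid for 15 minutes and append it to the blob URI.
// The resulting URL can be handed to the client for direct download from storage.
var account = CloudStorageAccount.Parse(connectionString);
var container = account.CreateCloudBlobClient().GetContainerReference("media");
var blob = container.GetBlockBlobReference("images/photo1.jpg");

var policy = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
};

string sasUrl = blob.Uri + blob.GetSharedAccessSignature(policy);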

Quite simply, you can enforce access control centrally when you use a controller. You have way more control over who/what/why is accessing the file. You can also log requests pretty easily too.
Longer term, you might want to change the locations of your files, add a partitioning strategy for scalability, or do something else in your app that requires a change that you don't see right now. When you use a controller you can isolate the client code from all of those inevitable changes.

Related

How to store file into inetpub\wwwroot instead of local machine folder on UWP application

I am currently developing a UWP application for my school project and one of the pages allows the user to take a picture of themselves. I created the feature by following this tutorial: CameraStarterKit
For now I am storing the pictures taken on my desktop's picture folder. But the requirement of my project is to store the pictures taken in a folder called "Photos" under inetpub\wwwroot.
I don't really understand what wwwroot or IIS is... hence, I have no idea how I should modify my code to store the pictures in that folder.
Here is my code for saving to my local desktop:
private async Task TakePhotoAsync()
{
    idleTimer.Stop();
    idleTimer.Start();

    var stream = new InMemoryRandomAccessStream();

    //MediaPlayer mediaPlayer = new MediaPlayer();
    //mediaPlayer.Source = MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/camera-shutter-click-03.mp3"));
    //mediaPlayer.Play();

    Debug.WriteLine("Taking photo...");
    await _mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), stream);

    try
    {
        var file = await _captureFolder.CreateFileAsync("NYPVisitPhoto.jpg", CreationCollisionOption.GenerateUniqueName);
        Debug.WriteLine("Photo taken! Saving to " + file.Path);

        var photoOrientation = CameraRotationHelper.ConvertSimpleOrientationToPhotoOrientation(_rotationHelper.GetCameraCaptureOrientation());
        await ReencodeAndSavePhotoAsync(stream, file, photoOrientation);
        Debug.WriteLine("Photo saved!");
    }
    catch (Exception ex)
    {
        // File I/O errors are reported as exceptions
        Debug.WriteLine("Exception when taking a photo: " + ex.ToString());
    }
}
For the storing of the files:
private static async Task ReencodeAndSavePhotoAsync(IRandomAccessStream stream, StorageFile file, PhotoOrientation photoOrientation)
{
    using (var inputStream = stream)
    {
        var decoder = await BitmapDecoder.CreateAsync(inputStream);

        using (var outputStream = await file.OpenAsync(FileAccessMode.ReadWrite))
        {
            var encoder = await BitmapEncoder.CreateForTranscodingAsync(outputStream, decoder);

            var properties = new BitmapPropertySet { { "System.Photo.Orientation", new BitmapTypedValue(photoOrientation, PropertyType.UInt16) } };

            await encoder.BitmapProperties.SetPropertiesAsync(properties);
            await encoder.FlushAsync();
        }
    }
}
I would add an answer since there are tricky things about this requirement.
The first is the app can only access a few folders, inetpub is not one of them.
Using a brokered Windows Runtime component (I would suggest using FullTrustProcessLauncher, which is much simpler to develop and deploy) can enable UWP apps to access folders in the same way that traditional desktop applications do.
While this works for an ordinary folder, the inetpub folder is different in that it requires administrator privileges to write to, unless you turn UAC off.
The desktop component launched by the app does not have adequate privileges to write to that folder, either.
So I think an alternative would be to set up a virtual directory in IIS Manager that maps to a folder in the public Pictures library, and have the app save pictures to that folder.
From the website's perspective, a virtual directory is the same as a real folder under inetpub; what differs is the access permissions.
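A minimal sketch of the saving side of that approach (assuming the Pictures Library capability is declared in the app manifest; the folder and file names are placeholders), which could replace _captureFolder in the question's code:
// Save into a "Photos" folder under the Pictures library; an IIS virtual directory
// can then be mapped to that folder.
StorageFolder picturesFolder = KnownFolders.PicturesLibrary;
StorageFolder photosFolder = await picturesFolder.CreateFolderAsync(
    "Photos", CreationCollisionOption.OpenIfExists);

// Use photosFolder in place of _captureFolder when creating the file:
StorageFile file = await photosFolder.CreateFileAsync(
    "NYPVisitPhoto.jpg", CreationCollisionOption.GenerateUniqueName);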
Kennyzx is right that you cannot access the inetpub folder from your UWP application due to permissions.
But if your application fulfills the following criteria, then you can use a Brokered Windows Runtime Component (a component within your app) to copy your file to any location in the system:
Your application is a LOB application.
You are only targeting desktop devices (I assume this is true given your requirement).
You are using sideloading for your app installation and distribution.
If all three are yes, then use a Brokered Windows Runtime Component for UWP. It's not a small thing that can be shown here on SO with an example, so it's worth reading up on and implementing it.

Getting Azure InstanceInput endpoint port

I want my client to communicate with a specific WorkerRole instance, so I'm trying to use InstanceInput endpoints.
My project is based on the example provided in this question: Azure InstanceInput endpoint usage
The problem is that I don't get the external IP address + port for the actual instance, when using RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint;
I just get the internal address with the local port (e.g. 10.x.x.x:10100). I know that I can get the public IP address via DNS lookup (xxx.cloudapp.net), but I don't have a clue how to get the correct public port for each instance.
One possible solution would be: get the instance number (from RoleEnvironment.CurrentRoleInstance.Id) and add this instance number to the FixedPortRange minimum (e.g. 10106). This would imply that the first instance will always have the port 10106, the second instance always 10107 and so on. This solution seems a bit hacky to me, since I don't know how Windows Azure assigns the instances to the ports.
Is there a better (correct) way to retrieve the public port for each instance?
Question #2:
Is there any information about the Azure Compute Emulator supporting InstanceInput endpoints? (As I already mentioned in the comments: it seems that the Azure Compute Emulator currently doesn't support InstanceInputEndpoint.)
Second solution (much better):
To get the public port, the property PublicIPEndpoint can be used (I don't know why I didn't notice this property in the first place).
Usage: RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].PublicIPEndpoint;
Warning:
The IP address in the property is unused (http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleinstanceendpoint.publicipendpoint.aspx).
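So a sketch of building an instance's public endpoint would be to take only the port from PublicIPEndpoint and combine it with the cloud service's DNS name (the DNS name below is a placeholder):
// Only the port of PublicIPEndpoint is meaningful; the address part is unused.
var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"];
int publicPort = endpoint.PublicIPEndpoint.Port;
string publicAddress = "myservice.cloudapp.net"; // hypothetical cloud service DNS name
string instanceEndpoint = publicAddress + ":" + publicPort;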
First solution:
As 'artfulmethod' already mentioned, the REST operation Get Deployment retrieves interesting information about the current deployment. Since I encountered some small annoying 'issues', I'll provide the code for the REST client here (in case someone else is having a similar problem):
X509Store certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
certificateStore.Open(OpenFlags.ReadOnly);

string footPrint = "xxx"; // enter the thumbprint of the certificate you used to upload the deployment (aka Management Certificate)

X509Certificate2Collection certs =
    certificateStore.Certificates.Find(X509FindType.FindByThumbprint, footPrint, false);
if (certs.Count != 1) {
    // client certificate cannot be found - check the thumbprint
}

string url = "https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>"; // replace <xxx> with actual values

try {
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.ClientCertificates.Add(certs[0]);
    request.Headers.Add("x-ms-version", "2012-03-01"); // very important, otherwise you get an HTTP 400 error; specifies in which version the response is formatted
    request.Method = "GET";

    var response = (HttpWebResponse)request.GetResponse(); // get response
    string result = new StreamReader(response.GetResponseStream()).ReadToEnd(); // get response body
} catch (Exception ex) {
    // handle error
}
The string 'result' contains all the information about the deployment (the format of the XML is described in the 'Response Body' section at http://msdn.microsoft.com/en-us/library/windowsazure/ee460804.aspx).
To get information about your deployments, including the VIPs and public ports for your role instances, use the Get Deployment operation on the Service Management API. The response body includes an InstanceInputList.

How to get x-ms-request-id from Azure table storage api call

I'm getting slow behavior for my Azure Table Storage API calls in a Windows Azure app. I need to get the request id (x-ms-request-id in the response header) for a particular call. Is there a way I can get it using the storage client API? Does the storage client API even expose this id? If not, is there any other way to get this id?
I am using the api in the following way:
public UserDataModel GetUserData(String UserId)
{
    UserDataModel osudm = null;
    try
    {
        var result = (from c in GetServiceContext().OrgUserIdTable
                      where (c.RowKey == UserId)
                      select c).FirstOrDefault();

        UserDataSource osuds = new UserDataSource(this.account);
        osudm = osuds.GetUserData(result.PartitionKey, result.UserName);
    }
    catch (Exception e)
    {
    }
    return osudm;
}
What you're asking here is more related to WCF Data Services than it is to Windows Azure (the storage client API uses this under the covers). Here is some example code showing how you can access the response headers:
var tableContext = new MyTableServiceContext(...);
DataServiceQuery<Order> query = tableContext.Orders.Where(o => o.RowKey == "1382898382") as DataServiceQuery<Order>;
IEnumerable<Order> result = query.Execute();
QueryOperationResponse response = result as QueryOperationResponse;
string requestId;
response.Headers.TryGetValue("x-ms-request-id", out requestId);
So what you'll be doing first is simply create your query and cast it to a DataServiceQuery of TType. Then you can call the Execute method on that query and cast it to a QueryOperationResponse. This class will give you access to all headers, including the x-ms-request-id.
Note that in this case you won't be able to use FirstOrDefault, since that doesn't return an IQueryable and you can't cast it to a DataServiceQuery of TType (unless there's another way to do it using WCF Data Services).
Note: The reason the call is so slow might be your query. When you query the OrgUserIdTable table, you only filter on the RowKey. I don't know how much data or how many partitions you have in that table, but if you don't use the PartitionKey this can have a significant performance impact. By not including the PartitionKey, you force a scan over all partitions (possibly across multiple servers), which might be why the call is so slow.
I suggest you take a look at the following real world guidance to get a better insight on how and why partitioning relates to performance in Windows Azure Storage: Designing a Scalable Partitioning Strategy for Windows Azure Table Storage
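For illustration, a minimal sketch of the same query scoped to a single partition (the partition key value is hypothetical and depends on how the table is partitioned):
// Filtering on PartitionKey and RowKey targets a single partition (a point query)
// instead of scanning the whole table.
var result = (from c in GetServiceContext().OrgUserIdTable
              where c.PartitionKey == "SomeOrganizationId" // hypothetical partition key
                 && c.RowKey == UserId
              select c).FirstOrDefault();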

azure reading mounted VHD

I am developing an Azure web application.
I have created drive and drivePath static members in WebRole as follows:
public static CloudDrive drive = null;
public static string drivePath = "";
I have created development storage drive in WebRole.OnStart as follows:
LocalResource azureDriveCache = RoleEnvironment.GetLocalResource("cache");
CloudDrive.InitializeCache(azureDriveCache.RootPath, azureDriveCache.MaximumSizeInMegabytes);

CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
    // for a console app, reading from App.config
    //configSetter(ConfigurationManager.AppSettings[configName]);

    // OR, if running in the Windows Azure environment
    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
});

CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
CloudBlobClient blobClient = account.CreateCloudBlobClient();

blobClient.GetContainerReference("drives").CreateIfNotExist();

drive = account.CreateCloudDrive(
    blobClient
        .GetContainerReference("drives")
        .GetPageBlobReference("mysupercooldrive.vhd")
        .Uri.ToString()
);

try
{
    drive.Create(64);
}
catch (CloudDriveException ex)
{
    // handle exception here
    // exception is also thrown if all is well but the drive already exists
}

string path = drive.Mount(azureDriveCache.MaximumSizeInMegabytes, DriveMountOptions.None);
IDictionary<String, Uri> listDrives = Microsoft.WindowsAzure.StorageClient.CloudDrive.GetMountedDrives();
drivePath = path;
The drive remains visible and accessible as long as execution stays inside WebRole.OnStart; as soon as execution leaves WebRole.OnStart, the drive becomes unavailable to the application and the static members are reset (for example, drivePath is set to "").
Am I missing some configuration, or is there some other error?
Where's the other code where you're expecting to use drivePath? Is it in a web application?
If so, are you using SDK 1.3? In SDK 1.3, the default mode for a web application is to run under full IIS, which means running in a separate app domain from your RoleEntryPoint code (like OnStart), so you can't share static variables across the two. If this is the problem, you might consider moving this initialization code to Application_Begin in Global.asax.cs instead (which is in the web application's app domain).
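A rough sketch of that suggestion, assuming the standard Application_Start handler in Global.asax.cs is what's meant, with the mount code mirrored from the question so it runs in the web application's app domain:
// Global.asax.cs - mount the drive when the web application starts.
public class Global : System.Web.HttpApplication
{
    public static string DrivePath = "";

    protected void Application_Start(object sender, EventArgs e)
    {
        LocalResource cache = RoleEnvironment.GetLocalResource("cache");
        CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        blobClient.GetContainerReference("drives").CreateIfNotExist();

        CloudDrive drive = account.CreateCloudDrive(
            blobClient.GetContainerReference("drives")
                      .GetPageBlobReference("mysupercooldrive.vhd")
                      .Uri.ToString());

        try { drive.Create(64); }
        catch (CloudDriveException) { /* drive may already exist */ }

        DrivePath = drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);
    }
}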
I found the solution:
On the development machine, the request originates from localhost, which was making the system crash.
Commenting out the "Sites" tag in ServiceDefinition.csdef resolves the issue.

Upload a file to SharePoint through the built-in web services

What is the best way to upload a file to a Document Library on a SharePoint server through the built-in web services that version WSS 3.0 exposes?
Following the two initial answers...
We definitely need to use the Web Service layer as we will be making these calls from remote client applications.
The WebDAV method would work for us, but we would prefer to be consistent with the web service integration method.
There is additionally a web service to upload files, painful but works all the time.
Are you referring to the “Copy” service?
We have been successful with this service’s CopyIntoItems method. Would this be the recommended way to upload a file to Document Libraries using only the WSS web service API?
I have posted our code as a suggested answer.
Example of using the WSS "Copy" Web service to upload a document to a library...
public static void UploadFile2007(string destinationUrl, byte[] fileData)
{
    // List of destination Urls, just one in this example.
    string[] destinationUrls = { Uri.EscapeUriString(destinationUrl) };

    // Empty Field Information. This can be populated but not for this example.
    SharePoint2007CopyService.FieldInformation information = new
        SharePoint2007CopyService.FieldInformation();
    SharePoint2007CopyService.FieldInformation[] info = { information };

    // To receive the result Xml.
    SharePoint2007CopyService.CopyResult[] result;

    // Create the Copy web service instance configured from the web.config file.
    SharePoint2007CopyService.CopySoapClient
        CopyService2007 = new CopySoapClient("CopySoap");
    CopyService2007.ClientCredentials.Windows.ClientCredential =
        CredentialCache.DefaultNetworkCredentials;
    CopyService2007.ClientCredentials.Windows.AllowedImpersonationLevel =
        System.Security.Principal.TokenImpersonationLevel.Delegation;

    CopyService2007.CopyIntoItems(destinationUrl, destinationUrls, info, fileData, out result);

    if (result[0].ErrorCode != SharePoint2007CopyService.CopyErrorCode.Success)
    {
        // ...
    }
}
Another option is to use plain ol' HTTP PUT:
WebClient webclient = new WebClient();
webclient.Credentials = new NetworkCredential(_userName, _password, _domain);
webclient.UploadFile(remoteFileURL, "PUT", FilePath);
webclient.Dispose();
Where remoteFileURL points to your SharePoint document library...
There are a couple of things to consider:
Copy.CopyIntoItems needs the document to be already present at some server. The document is passed as a parameter of the webservice call, which will limit how large the document can be. (See http://social.msdn.microsoft.com/Forums/en-AU/sharepointdevelopment/thread/e4e00092-b312-4d4c-a0d2-1cfc2beb9a6c)
the 'http put' method (i.e. WebDAV) will only put the document in the library, but not set field values
to update field values you can call Lists.UpdateListItems after the 'http put'
document libraries can have directories; you can create them with 'http mkcol'
you may want to check in files with Lists.CheckInFile (a sketch follows this list)
you can also create a custom webservice that uses the SPxxx .Net API, but that new webservice will have to be installed on the server. It could save trips to the server.
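For instance, a sketch of checking in a document after an 'http put', using the WSS Lists web service (the ASMX-style proxy class name Lists mirrors the Copy proxy used in the code below; URLs and file names are placeholders):
// Check the uploaded file in as a major version ("1"); "0" = minor, "2" = overwrite check-out.
var lists = new Lists
{
    Url = "http://servername/sitename/_vti_bin/lists.asmx",
    UseDefaultCredentials = true
};
lists.CheckInFile(
    "http://servername/sitename/doclibrary/filename",
    "Uploaded via HTTP PUT",
    "1");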
public static void UploadFile(byte[] fileData) {
    var copy = new Copy {
        Url = "http://servername/sitename/_vti_bin/copy.asmx",
        UseDefaultCredentials = true
    };

    string destinationUrl = "http://servername/sitename/doclibrary/filename";
    string[] destinationUrls = { destinationUrl };

    var info1 = new FieldInformation
    {
        DisplayName = "Title",
        InternalName = "Title",
        Type = FieldType.Text,
        Value = "New Title"
    };
    FieldInformation[] info = { info1 };

    var copyResult = new CopyResult();
    CopyResult[] copyResults = { copyResult };

    copy.CopyIntoItems(
        destinationUrl, destinationUrls, info, fileData, out copyResults);
}
NOTE: Changing the 1st parameter of CopyIntoItems to the file name, Path.GetFileName(destinationUrl), makes the unlink message disappear.
I've had good luck using the DocLibHelper wrapper class described here: http://geek.hubkey.com/2007/10/upload-file-to-sharepoint-document.html
From a colleague at work:
Lazy way: your Windows WebDAV filesystem interface. It is bad as a programmatic solution because it relies on the WebClient service running on your OS, and it only works on websites running on port 80. Map a drive to the document library and get on with the file copying.
There is additionally a web service to upload files, painful but works all the time.
I believe you are able to upload files via the FrontPage API but I don’t know of anyone who actually uses it.
Not sure on exactly which web service to use, but if you are in a position where you can use the SharePoint .NET API Dlls, then using the SPList and SPLibrary.Items.Add is really easy.
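If that route is open to you, a minimal sketch with the server-side object model (it must run on the SharePoint server itself; the site URL, library and file names are placeholders):
// Add a file to a document library using the server object model (Microsoft.SharePoint).
using (SPSite site = new SPSite("http://servername/sitename"))
using (SPWeb web = site.OpenWeb())
{
    SPFolder library = web.GetFolder("Shared Documents");
    library.Files.Add("filename.docx", fileData, true); // true = overwrite an existing file
}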
