Getting blob URI without connecting to Azure

I've created a file system abstraction where I store files with a relative path, e.g. /uploads/images/img1.jpg.
These can then be saved either on the local file system (relative to a folder) or in Azure. I can also ask a method to give me the URL for accessing that relative path.
In Azure, this is currently done similarly to the code below:
public string GetWebPathForRelativePathOnUserContentStorage(string relativeFileFullPath)
{
    // No network call happens here: the blob reference (and its Uri) is built locally.
    var container = getCloudBlobContainer();
    CloudBlockBlob blob = container.GetBlockBlobReference(relativeFileFullPath);
    return blob.Uri.ToString();
}
On a normal website, there might be, say, 40 images on one page, so this gets called around 40 times. Is this, first of all, slow? I've noticed there is a particular pattern in the generated URL:
https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
Can I safely generate that URL without using the Azure storage API?

On a normal website, there might be, say, 40 images on one page, so this gets called around 40 times. Is this, first of all, slow?
Not at all. The code you wrote above does not make any calls to storage; it just creates a CloudBlockBlob instance locally. If you were using the GetBlockBlobReferenceFromServer method, it would be a different story, because that method does make a call to storage.
I've noticed there is a particular pattern in the generated URL:
https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
Can I safely generate that URL without using the Azure storage API?
Absolutely, yes. Assuming you're using a standard setup, that would be perfectly fine. Non-standard setups would include things like using a custom domain for your blob storage or connecting to the geo-secondary location of your storage account.
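Under those assumptions, a minimal sketch of generating the URL by hand could look like this (the account and container names are illustrative and would normally come from configuration):
public string GetWebPathForRelativePathOnUserContentStorage(string relativeFileFullPath)
{
    // Illustrative values; in practice these would come from configuration.
    const string accountName = "mystorageaccount";
    const string containerName = "usercontent";
    // Mirrors the pattern https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
    return string.Format("https://{0}.blob.core.windows.net/{1}/{2}",
        accountName, containerName, relativeFileFullPath.TrimStart('/'));
}
If the relative paths can contain characters that need escaping, encode them first (e.g. with Uri.EscapeUriString) so the hand-built URL matches what blob.Uri would have produced.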

Related

Azure blob storage file being accessed by multiple Azure nodes

I have multiple JSON files being pushed to an Azure storage account under a specific container. There are n files in the container.
There are 4 to 8 nodes accessing the Azure storage container to download the files locally; the download code is written in Java.
Since there are n files and multiple nodes accessing the container at the same time, how do I avoid the same file being downloaded by more than one server?
Example:
The Azure container has 1.json, 2.json, 3.json, etc., each larger than 35 MB.
batch-process-node1 -> starts downloading 1.json
batch-process-node2 -> starts downloading 2.json
batch-process-node3 -> should not start downloading 1.json
Is there any logic that can be built into each node's Java process so that each file is downloaded by exactly one node?
Is there any setting that can be configured on the Azure storage container?
--
I'm trying to use the Camel azure-blob component, using the block blob type (blobType).
I'm new to Azure blob storage; any help is appreciated.
Since we are already using Apache Camel in the code, we tried the camel azure-blob component to address the issue. Below is the approach we used; a small race window remains, which is acceptable for our scenario.
The Camel route starts with a timer consumer, and a producer gets the list of blobs from the container using the endpoint below:
azure-blob://<account>/<container>?credentials=#storagecredentials&blobType=blockBlob&operation=listBlobs
Note: storagecredentials is a bean of type StorageCredentialsAccountAndKey.
We created a Java class implementing Camel's Processor interface; in its process() method, exchange.getIn().getBody() provides an iterable of ListBlobItem objects.
First, we set metadata on the blob using the endpoint below:
azure-blob://<account>/<container>/<blobName>?credentials=#storagecredentials&blobType=blockBlob&operation=updateBlockBlob&blobMetadata=#blobMetaData1
Note: blobMetaData1 is a bean defined in the Spring context file:
<util:map id="blobMetaData1" map-class="java.util.HashMap">
    <entry key="someKey" value="someValue"/>
</util:map>
Key thing: in this class's process() method, we:
validate whether the metadata is already set; if it is, another process has already picked up the blob, so it won't be picked again even when the processes run on different servers;
get the blob name from each individual ListBlobItem using getURI() and form the update endpoint within this processor class;
set that custom endpoint as a custom header value on the In message in order to invoke it;
use Camel's recipientList option, which invokes the metadata endpoint to update the specific blob.
Then another processor builds the download endpoint:
azure-blob://<account>/<container>/<blobName>?credentials=#storagecredentials&blobType=blockBlob&operation=getBlob
and recipientList again reads that endpoint from the message header to fetch the blob.
Finally, we build a delete endpoint that removes the blob once it has been downloaded.
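For readers outside Camel, here is a minimal sketch of the same claim-by-metadata idea expressed with the .NET storage client (the question's nodes run Java, but the storage semantics are identical). Passing the blob's ETag as an access condition makes the claim atomic, which closes the race window mentioned above; TryClaimBlob is a hypothetical helper name.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

static bool TryClaimBlob(CloudBlockBlob blob, string nodeId)
{
    blob.FetchAttributes();                        // load current metadata and ETag
    if (blob.Metadata.ContainsKey("claimedBy"))
        return false;                              // another node already claimed this blob
    blob.Metadata["claimedBy"] = nodeId;
    try
    {
        // Succeeds only if the blob is unchanged since FetchAttributes,
        // so two nodes can never both claim the same blob.
        blob.SetMetadata(AccessCondition.GenerateIfMatchCondition(blob.Properties.ETag));
        return true;
    }
    catch (StorageException e) when (e.RequestInformation.HttpStatusCode == 412)
    {
        return false;                              // precondition failed: lost the race
    }
}
The Java storage client exposes the same conditional-update primitives (AccessCondition with an ETag match), so the identical pattern can be applied inside the Camel processor.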

IFileProvider with Azure File Storage

I am thinking about implementing the IFileProvider interface on top of Azure File Storage.
What I am trying to find in the docs is whether there is a way to send the whole path of a file to the Azure API, like rootDirectory/sub1/sub2/example.file, or whether that should be mapped to a recursive function that takes the path and traverses the directory structure on file storage.
I just want to make sure I am not missing something and reinventing the wheel for something that already exists.
[UPDATE]
I'm using the Azure Storage Client for .NET. I would not like to mount anything.
My intention is to have several IFileProviders which I could switch between based on environment and other conditions.
So, for example, if my environment is Cloud, I would use an IFileProvider implementation that uses Azure File Storage through the Azure Storage Client. If my environment is MyServer, I would use the server's local file system. A third option would be an environment someOther with its own particular implementation.
Now, for all of them, IFileProvider operates with a path like root/sub1/sub2/sub3. For Azure File Storage, is there a way to send the whole path at once to get sub3 info/content, or should the path be broken into individual directories, getting a reference/content for each step?
I hope that clears up the question.
Now, for all of them, IFileProvider operates with a path like root/sub1/sub2/sub3. For Azure File Storage, is there a way to send the whole path at once to get sub3 info/content, or should the path be broken into individual directories, getting a reference/content for each step?
To access a specific subdirectory nested several levels deep, you can pass the whole relative path to the GetDirectoryReference method to construct the CloudFileDirectory, as follows:
var fileshare = storageAccount.CreateCloudFileClient().GetShareReference("myshare");
var rootDir = fileshare.GetRootDirectoryReference();
// The whole relative path can be passed in a single call; no per-level traversal is needed.
var dir = rootDir.GetDirectoryReference("2017-10-24/15/52");
var items = dir.ListFilesAndDirectories();
To access a specific file under a subdirectory, you can likewise pass the whole path to the GetFileReference method, which returns a CloudFile instance:
var file = rootDir.GetFileReference("2017-10-24/15/52/2017-10-13-2.png");
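Tying that back to IFileProvider, a minimal sketch of GetFileInfo could pass the whole subpath straight through (share is assumed to be an already-initialized CloudFileShare; AzureFileInfo is a hypothetical IFileInfo wrapper you would write yourself):
public IFileInfo GetFileInfo(string subpath)
{
    // The whole relative path goes through in one call; no directory-by-directory traversal.
    var file = share.GetRootDirectoryReference()
                    .GetFileReference(subpath.TrimStart('/'));
    // AzureFileInfo is hypothetical; NotFoundFileInfo ships with Microsoft.Extensions.FileProviders.
    return file.Exists() ? (IFileInfo)new AzureFileInfo(file) : new NotFoundFileInfo(subpath);
}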

What is metadata? How do I create metadata and associate it with a Cloud target using Vuforia?

I modified the sample CloudRecog code for my own project. I created a cloud database and got the access keys, then copied these keys into the CloudReco.cpp file. What should I use for metadata? I didn't understand this. Then, while reading the sample code, I saw this line: private static final String mServerURL = "https://ar.qualcomm.at/samples/cloudreco/json/". How do I get my metadata URL?
The Vuforia Cloud Recognition Service enables new types of applications in retail and publishing. An application using Cloud Recognition will be able to query a Cloud Database with camera images (actual recognition happens in the cloud), and then handle the matching results returned from the cloud to perform local detection and tracking.
Also, every Cloud Image Target can optionally have associated metadata; target metadata is essentially nothing more than a custom user-defined blob of data that can be associated with a target and filled with custom information, as long as the data size does not exceed the allowed limit (up to 1MB per target).
Therefore, you can use the metadata as a way to store additional content that relates to a specific target, that your application will be able to process using some custom logic.
For example, your application may use the metadata to store:
a simple text message that you want your app to display on the screen of your device when the target is detected, for example:
“Hello, I am your cloud image target XYZ, you have detected me :-) !”
a simple URL string (for instance “http://my_server/my_3d_models/my_model_01.obj”) pointing to a custom network location where you have stored some other content, like a 3D model, a video, an image, or any other custom data, so that for each different image target, your application may use such a URL to download the specific content;
more generally, some custom string that your application is able to process and use to perform specific actions
a full 3D model (not just the URL pointing to a model on a server, but the model itself), for example the metadata itself could embed an .OBJ 3D model, provided that the size does not exceed the allowed limits (up to 1MB)
and more ...
How do I create/store metadata for a Cloud target ?
Metadata can be uploaded together with an image target at the time you create the target itself in your Cloud Database; or you can also update the metadata of an existing target, at a later time; in either case, you can use the online TargetManager, as explained here:
https://developer.vuforia.com/resources/dev-guide/managing-targets-cloud-database-using-target-manager
or you can proceed programmatically using the VWS API, as explained here:
https://developer.vuforia.com/resources/dev-guide/managing-targets-cloud-database-using-developer-api
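For the programmatic route, the metadata travels as a base64-encoded field in the body of the VWS "add target" request; roughly like this (field names per the VWS documentation, values illustrative):
{
    "name": "myCloudTarget",
    "width": 32.0,
    "image": "<base64-encoded target image>",
    "application_metadata": "<base64-encoded metadata blob>",
    "active_flag": true
}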
How can I get the metadata of a Cloud target when it is recognized ?
The Vuforia SDK offers a dedicated API to retrieve the metadata of a target in your mobile application. When a Cloud target is detected (recognized), a new TargetSearchResult is reported to the application, and the metadata can be obtained using one of these methods:
Vuforia Native SDK - C++ API: TargetSearchResult::getMetaData() - const char*
Vuforia Native SDK - Java API: TargetSearchResult.getMetaData() - String
Vuforia Unity Extension - C# API: TargetSearchResult.Metadata - string
See also the API reference pages:
https://developer.vuforia.com/resources/api/classcom_1_1qualcomm_1_1vuforia_1_1_target_search_result
https://developer.vuforia.com/resources/api/unity/struct_target_finder_1_1_target_search_result
Sample code:
For reference sample code in native Android, see Books.java in the "Books-2-x-y" sample project.
For reference sample code in native iOS, see the BooksEAGLView.mm file in the "Books-2-x-y" sample project.
For reference sample code in Unity, see the CloudRecoEventHandler.cs script (attached to the CloudRecognition prefab) in the Books sample; in particular, the OnNewSearchResult method shows how to get a targetSearchResult object (from which you can then get the metadata, as shown in the example code).
EDIT: this is in response to the first part of your question: "What should I use for metadata?" (not the second part about how to find the URL).
Based on their documentation (https://developer.vuforia.com/resources/dev-guide/cloud-targets):
The metadata is passed to the application whenever the Cloud Reco target is recognized. It is up to the developer to determine the content of this metadata – Vuforia treats it as a blob and just passes it along to the application. The maximum size of the uploadable metadata is 150kByte.
I added some debugging in their CloudRecognition app and saw that the payload (presumably the meta-data) they return when "recognizing" an image is:
{
    "thumburl": "https://developer.vuforia.com/samples/cloudreco/thumbs/01_thumbnail.png",
    "author": "Karina Borland",
    "your price": "43.15",
    "title": "Cloud Recognition in Vuforia",
    "average rating": "4",
    "# of ratings": "41",
    "targetid": "a47d2ea6b762459bb0aed1ae9dbbe405",
    "bookurl": "https://developer.vuforia.com/samples/cloudreco/book1.php",
    "list price": "43.99"
}
The metadata, uploaded along with your image target in the CloudReco database, is a .txt file containing whatever you want.
What pherris posted as the payload from the sample application is in fact the contents of a .json file that the given image target's metadata links to.
In the sample application, the structure is as follows (sketched in code below):
The application activates the camera and recognizes an image target.
The application then requests that specific image target's metadata.
In this case, the metadata in question is a .txt file with the following content:
http://www.link-to-a-specific-json-file.com/randomname.json
The application then requests the contents of that specific .json file.
That .json file looks like the text data that pherris posted above.
The application uses the text data from the .json file to fill out the actual content of the sample application.
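In Unity C# terms, that flow might look roughly like this (a minimal sketch: the class is assumed to derive from MonoBehaviour and implement the cloud reco event handler interface; the metadata property's exact casing varies by SDK version, and JSON parsing is left out):
// Steps 1-3: the cloud reco callback hands us the target's metadata,
// which in this sample is just the URL of a .json file.
public void OnNewSearchResult(TargetFinder.TargetSearchResult targetSearchResult)
{
    string jsonUrl = targetSearchResult.MetaData;   // the contents of the uploaded .txt file
    StartCoroutine(FetchContent(jsonUrl));          // step 4: request the .json file
}

// Steps 5-6: download the .json file and hand its text to the UI code.
private System.Collections.IEnumerator FetchContent(string url)
{
    WWW request = new WWW(url);                     // Unity-era download API
    yield return request;
    string json = request.text;
    Debug.Log(json);                                // parse with any JSON library, then fill the UI
}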

Getting an object's links in the Rackspace Cloud Files API

I am using the Java jclouds API to access my Rackspace Cloud Files account.
I can create and list containers and upload objects, but I can't figure out how to get the public links for an uploaded object. (I can see these public links in the Rackspace control panel by right-clicking on the object; there are 4 types: HTTP, HTTPS, Streaming, iOS Streaming.)
The closest I can get is using object.getInfo() to get the object's metadata. This includes a URI, but it doesn't resemble the public links I find in the control panel.
Anyone know what I'm doing wrong?
I figured it out...
First, I should get the public URI from the object's container, not from the object itself.
Then, using a CloudFilesClient object, I call getCDNMetadata("containername").getCDNUri() on the container.
Here is more information and some sample code to get the specific file CDN address.
For more details you can checkout the Java guide:
https://developer.rackspace.com/docs/cloud-files/quickstart/?lang=java
First, get the Cloud Files API:
CloudFilesApi cloudFilesApi = ContextBuilder.newBuilder("rackspace-cloudfiles-us")
        .credentials("{username}", "{apiKey}")
        .buildApi(CloudFilesApi.class);
From there you can query the container:
CDNApi cdnApi = cloudFilesApi.getCDNApi("{region}");
CDNContainer cdnContainer = cdnApi.get("{containerName}");
Now with that CDNContainer you can get the specific web address that you need:
URI httpURI = cdnContainer.getUri();
URI httpsURI = cdnContainer.getSslUri();
This gets you the base URI for the container. To get the final address for a specific file, append /{your_file_name.extension} to the end of that address. For example, if my base URI were converted to a URL and then to a String, it might look like:
http://123456asdf-qwert987653.rackcdn.com/
From here I can get a file with the name example.mp4 with the following address:
http://123456asdf-qwert987653.rackcdn.com/example.mp4
This all assumes that you have already enabled CDN on the container.

If using ImageResizer with Azure blobs do I need the AzureReader2 plugin?

I'm working on a personal project to manage users of my club; it's hosted on the free Azure package (for now at least), partly as an experiment to try out Azure. Part of creating a user's record is adding a photo, so I've got a Contact Card view that lets me see who they are, when they came, and their photo.
I have installed ImageResizer, and it's really easy to resize the 10MP photos from my camera and save them to the local file system, but it seems that for Azure I need to use their blobs to upload pictures to Windows Azure Web Sites, and that's new to me. The documentation on ImageResizer says I need AzureReader2 in order to work with Azure blobs, but it isn't free. Their best practices also say (#5):
Use dynamic resizing instead of pre-resizing your images.
That is not what I was planning; I was going to resize to 300x300 and 75x75 (for a thumbnail) when creating the user's record. But if I should be storing full-size images as blobs and resizing dynamically on the way out, can I just use standard means to upload a blob into a container to save it to Azure, and then, when I want to display the images, use ImageResizer and pass it each image to resize as required? That way I would not need AzureReader2 - or have I misunderstood what it does / how it works?
Is there another way I should consider?
I've not yet implemented cropping, but that's next to tackle once I've worked out how to actually store the images properly.
With some trepidation, I'm going to disagree with astaykov here. I believe you CAN use ImageResizer with Azure WITHOUT needing AzureReader2. Maybe I should qualify that by saying 'It works on my setup' :)
I'm using ImageResizer in an MVC 3 application. I have a standard Azure account with an images container.
Here's my test code for the view:
@using (Html.BeginForm("UploadPhoto", "BasicProfile", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    <input type="file" name="file" />
    <input type="submit" value="OK" />
}
And here's the corresponding code in the Post Action method:
// This action handles the form POST and the upload.
[HttpPost]
public ActionResult UploadPhoto(HttpPostedFileBase file)
{
    // Verify that the user selected a file.
    if (file != null && file.ContentLength > 0)
    {
        string newGuid = Guid.NewGuid().ToString();
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
            ConfigurationManager.AppSettings["StorageConnectionString"]);
        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        // Retrieve a reference to a previously created container.
        CloudBlobContainer container = blobClient.GetContainerReference("images");
        // Retrieve a reference to the blob we want to create.
        CloudBlockBlob blockBlob = container.GetBlockBlobReference(newGuid + ".jpg");
        // Populate our blob with contents from the uploaded file.
        using (var ms = new MemoryStream())
        {
            ImageResizer.ImageJob i = new ImageResizer.ImageJob(file.InputStream,
                ms, new ImageResizer.ResizeSettings("width=800;height=600;format=jpg;mode=max"));
            i.Build();
            blockBlob.Properties.ContentType = "image/jpeg";
            ms.Seek(0, SeekOrigin.Begin);
            blockBlob.UploadFromStream(ms);
        }
    }
    // Redirect back to the index action to show the form once again.
    return RedirectToAction("UploadPhoto");
}
This is 'rough and ready' code to test the theory and could certainly stand improvement, but it does work both locally and when deployed on Azure. I can also view the images I've uploaded, which are correctly re-sized.
Hope this helps someone.
The answer to the concrete question:
If using ImageResizer with Azure blobs do I need the AzureReader2 plugin?
is YES. As described in ImageResizer's documentation, that plugin is used to read, process, and serve images out of blob storage. So there is no doubt: if you are going to use ImageResizer, AzureReader2 is the plugin you need to make things work right. It will take care of blob uploads/serving.
However, I question the ImageResizer team's grasp of Windows Azure versioning, since they reference "Azure SDK v2.0" while the most current Azure SDK version is 1.8. What they mean is the Azure Storage Client Library, which has versions 1.7 and 2.x; version 2.x is the recommended one and ships with Azure SDK 1.8. So do not search for an Azure SDK 2.0; install the latest one, which is 1.8, and use the NuGet Package Manager to install the Azure Storage Client Library v2.0.x.
You can also upload resized versions to Azure. First upload the original image as a blob, say with the name /original/xxx.jpg; then create a resized version of the image and upload it to Azure with a name like /thumbnail/xxx.jpg. If you want to create the resized versions on the fly or on a separate thread, you may need to temporarily save the original to disk.
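A rough sketch of that pre-resize approach, reusing the upload code from the answer above (container, file, and newGuid as defined there; the blob names are illustrative):
// Rewind the uploaded file's stream before reading it a second time.
file.InputStream.Position = 0;
using (var thumbStream = new MemoryStream())
{
    // Produce a 75x75 thumbnail; "mode=crop" fills the square by cropping.
    new ImageResizer.ImageJob(file.InputStream, thumbStream,
        new ImageResizer.ResizeSettings("width=75;height=75;format=jpg;mode=crop")).Build();
    thumbStream.Seek(0, SeekOrigin.Begin);
    CloudBlockBlob thumbBlob = container.GetBlockBlobReference("thumbnail/" + newGuid + ".jpg");
    thumbBlob.Properties.ContentType = "image/jpeg";
    thumbBlob.UploadFromStream(thumbStream);
}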
