FIWARE Object Storage GE: different types of responses obtained when downloading an image object

I can successfully use all of the operations available for Object Storage on my FIWARE account.
Nonetheless, I have identified some strange behaviour when downloading objects from a container.
The procedure to reproduce that behaviour is as follows:
1. I upload two objects ("gonzo.png" and "elmo.png") to the container "photos":
1.1. First, by means of the Cloud UI (https://cloud.lab.fiware.org/#objectstorage/containers/) I manually upload the object "gonzo.png".
1.2. Later, following the instructions from the Object Storage GE programmer's guide, I programmatically (or with the help of a standalone REST client) upload the object "elmo.png".
2. I download the objects from the container "photos":
2.1. First, following the instructions from the Object Storage GE programmer's guide, I successfully download the object "gonzo.png". The web service response body is the binary content of the object.
2.2. Later, following the same instructions as in step 2.1, I try to download the object "elmo.png". This time the web service response body is a JSON document with metadata and the content of the object.
What can I do to receive a consistent response body for both objects, either binary or JSON?
Why do I get a different response depending on whether the object was originally uploaded via the Cloud UI or via an external tool (a program or REST client)?
As in "Download blob from fiware object-storage", I have already tried setting the header response_type: text, and the behaviour is the same.

There are many object stores out there, each with a different API.
The Object Storage GE was initially based on the CDMI API [1].
Currently, it is based on Openstack Swift [2].
The Cloud Portal still uses some of the CDMI features; in particular, it may apply base64 encoding to some types of objects, in which case the object content is a JSON document that contains the metadata and a base64 encoding of the data. I suspect that this is what happened to the object you created using the Cloud UI.
Thus, please use the native Swift API for all operations.
The API is well documented here: http://developer.openstack.org/api-ref-objectstorage-v1.html
and the Python examples in the programmer's guide (https://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/Object_Storage_-_User_and_Programmers_Guide) also use the native API.
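For example, here is a minimal sketch in Java of downloading an object through the native Swift HTTP interface. The endpoint URL and token are placeholders you would obtain from Keystone authentication; the container and object names are taken from the question. With the native API, the response body is the raw object bytes for every object, regardless of how it was uploaded.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class SwiftDownloadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: your Swift endpoint (from the service catalog) and a valid token.
        String endpoint = "https://<swift-host>/v1/AUTH_<tenant>";
        String token = "<your-auth-token>";

        URL url = new URL(endpoint + "/photos/elmo.png");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("X-Auth-Token", token);

        // A 200 response body is the raw object bytes, with no JSON/base64 wrapper.
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, Paths.get("elmo.png"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}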
[1] Google for SNIA CDMI. Having less than 10 reputation, I cannot post too many links.
[2] Google for OpenStack Swift.

Related

OpenAPI Generator issue with Destination service API specification

I want to get all destinations on subaccount and instance level. In the SAP API Business Hub, I found the API information and the "SAP Cloud SDK" tab to generate code with the OpenAPI generator.
https://api.sap.com/api/SAP_CP_CF_Connectivity_Destination/overview
I downloaded the API specification and added the dependencies to my Cloud SDK for Java project. The code is generated, but with some errors (unknown models) in the generated API classes.
For example, in DestinationsOnSubaccountLevelApi.class, the model OneOfDestinationNameOnly is imported and used in a method, but it is not generated in the model package.
I looked into the API specification and found that there were two possible types of response entity. That is the reason why the code could not be generated properly. I can modify the API specification to make it work, but that should not be the long-term solution. Is there any other way to fix this issue?
Unfortunately, the SAP Cloud SDK generator for OpenAPI services is not yet able to understand the oneOf relationship that is modeled in the specification.
As an alternative, would you consider using the DestinationAccessor API for resolving single destinations?
You can also directly instantiate an ScpCfDestinationLoader, which allows for querying all destinations:
ScpCfDestinationLoader loader = new ScpCfDestinationLoader();
DestinationOptions options = DestinationOptions
        .builder()
        .augmentBuilder(ScpCfDestinationOptionsAugmenter.augmenter()
                .retrievalStrategy(ScpCfDestinationRetrievalStrategy.ALWAYS_SUBSCRIBER))
        .build();
Try<Iterable<ScpCfDestination>> destinations = loader.tryGetAllDestinations(options);
Similar to the default behavior of the DestinationAccessor API, the code above considers only the subscriber account. The available retrieval strategies are:
ScpCfDestinationRetrievalStrategy.ALWAYS_SUBSCRIBER
ScpCfDestinationRetrievalStrategy.ALWAYS_PROVIDER
ScpCfDestinationRetrievalStrategy.SUBSCRIBER_THEN_PROVIDER
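For completeness, the returned Try is Vavr's io.vavr.control.Try, which the SDK uses, so one way to consume the result (just a sketch, the handling is up to you) is:

destinations
        .onSuccess(all -> all.forEach(destination -> System.out.println(destination)))
        .onFailure(failure -> System.err.println("Destination lookup failed: " + failure));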

Azure blob storage files being accessed by multiple Azure nodes

I have multiple JSON files being pushed to an Azure storage account under a specific container. There are n files in the container.
There are 4 to 8 nodes that will access the Azure storage container to download the files locally; the download code is written in Java.
Since there are n files and multiple nodes accessing the container at the same time, how can I avoid the same file being downloaded by more than one server?
Example:
The Azure container has 1.json, 2.json, 3.json, etc., each > 35 MB in size.
batch-process-node1 -> starts downloading 1.json
batch-process-node2 -> starts downloading 2.json
batch-process-node3 -> should not start downloading 1.json
Is there any logic each node's Java process can use so that each file is downloaded by only one node?
Is there any setting that can be configured on the Azure storage container?
--
I am trying to use the Camel azure-blob component, using the block blob type (blobType=blockBlob).
I am new to Azure Blob Storage; any help is appreciated.
Since we were already using Apache Camel in the code, we tried the camel azure-blob component to address the issue. Below is the approach we used; the remaining race condition is acceptable for our scenario.
The Camel route starts with a timer consumer, and a producer gets the list of blobs in the container using the endpoint below:
azure-blob://<account>/<container>?credentials=#storagecredentials&blobType=blockBlob&operation=listBlobs
Note: storagecredentials is a bean of type StorageCredentialsAccountAndKey.
We created a Java class implementing Camel's Processor interface; in its process() method, exchange.getIn().getBody() provides an iterable of ListBlobItem objects.
First, we set the metadata of the blob using the endpoint below:
azure-blob://<account>/<container>/<blobName>?credentials=#storagecredentials&blobType=blockBlob&operation=updateBlockBlob&blobMetadata=#blobMetaData1
Note: blobMetaData1 is a bean created in the Camel context file.
<util:map id="blobMetaData1" map-class="java.util.HashMap">
    <entry key="someKey" value="someValue"/>
</util:map>
The key thing: in this processor's process() method, we validate whether the metadata is already set. If it is, another process has already picked up the blob, so it will not be picked again, even when the processes run on different servers.
We get the blob name from each individual ListBlobItem using getURI(), and form the metadata-update endpoint inside this processor class. To invoke this dynamically formed endpoint, we set it as a custom header value on the In message.
We then use the recipientList option, which invokes the metadata endpoint from that header to update the specific blob.
Then we use another processor to form the download endpoint
azure-blob://<account>/<container>/<blobName>?credentials=#storagecredentials&blobType=blockBlob&operation=getBlob
and use recipientList again to read that endpoint from the message header and download the blob.
Finally, we form a delete endpoint that deletes the blob once it has been downloaded.
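For illustration, here is a condensed sketch of that check-then-claim sequence, written against the plain Azure Storage Java SDK (v8) rather than Camel, so the steps are explicit in one place. The container name, metadata key, and connection string are placeholders, and the gap between the metadata check and the metadata write is the same race condition the approach above accepts:

import java.io.File;
import java.util.HashMap;

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;
import com.microsoft.azure.storage.blob.ListBlobItem;

public class BlobClaimSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; replace with your account's.
        CloudStorageAccount account = CloudStorageAccount.parse(
                "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");
        CloudBlobClient client = account.createCloudBlobClient();
        CloudBlobContainer container = client.getContainerReference("mycontainer");

        for (ListBlobItem item : container.listBlobs()) {
            if (!(item instanceof CloudBlockBlob)) {
                continue;
            }
            CloudBlockBlob blob = (CloudBlockBlob) item;

            blob.downloadAttributes(); // refresh metadata from the service
            HashMap<String, String> metadata = blob.getMetadata();
            if (metadata.containsKey("someKey")) {
                continue; // already claimed by another node
            }

            // Claim the blob by writing metadata (another node checking
            // between our check and this write could still race us).
            metadata.put("someKey", "someValue");
            blob.setMetadata(metadata);
            blob.uploadMetadata();

            // Download, then delete, as in the Camel route above.
            blob.downloadToFile(new File("downloads", blob.getName()).getPath());
            blob.delete();
        }
    }
}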

How to create an Index with custom analyzers from a JSON file in the Azure Search .NET SDK?

I had read that the Azure Search .NET SDK uses Newtonsoft.Json to convert its models to/from JSON in its underlying REST API calls, so I've been doing the same in my own app.
I have a simple app that creates a new Index using the .NET SDK. To do this, I defined my Index in a JSON file, using the format outlined at https://learn.microsoft.com/en-us/rest/api/searchservice/create-index, and then converted it to a Microsoft.Azure.Search.Models.Index object using Newtonsoft:
var index = JsonConvert.DeserializeObject<Microsoft.Azure.Search.Models.Index>(System.IO.File.ReadAllText("config.json"));
This worked fine before I configured custom analyzers, but now that I have custom analyzers in my config, the Analyzers, Tokenizers, and TokenFilters are not resolved into the correct types. That is, my custom analyzer is deserialized as a Microsoft.Azure.Search.Models.Analyzer instead of a Microsoft.Azure.Search.Models.CustomAnalyzer; the same goes for the Tokenizers and TokenFilters, which are deserialized into their base types.
Is there an easy way I can create an Index like this in the .NET SDK from a JSON file?
Unfortunately, this is not an officially supported scenario. While it works for simple index definitions, we're working to understand what we need to do to be able to support all cases.
Please post your feature request on our User Voice page to help us prioritize: https://feedback.azure.com/forums/263029-azure-search
In the meantime, you might be able to get it working yourself by adapting the JsonSerializerSettings initialization code at the bottom of this file.

generateSharedAccessSignature not adding sv parameter?

I'm trying to generate a Shared Access Signature and am using the code here (http://blogs.msdn.com/b/brunoterkaly/archive/2014/06/13/how-to-provision-a-shared-access-signatures-that-allows-clients-to-upload-files-to-to-azure-storage-using-node-js-inside-of-azure-mobile-services.aspx) in a custom API to generate the SAS.
The signature seems to be missing the sv=2014-02-14 parameter when generateSharedAccessSignature() is called.
The SAS URL doesn't seem to work when I try it (I get a 400 "XML not valid" error), but a SAS generated from Azure Management Studio contains the sv parameter and works when I attempt to upload with it.
Any ideas?
Based on the Storage Service REST API documentation, the sv parameter in a Shared Access Signature was introduced in storage service version 2014-02-14. My guess is that Azure Mobile Services is using an older version of the storage service API, and this is the reason you don't see the sv parameter in your SAS token.
This could also be why you are getting the 400 (invalid XML) error. In earlier versions of the storage service API, the XML syntax for committing a block list was different from what is used currently. I have had one more user come to my blog post complaining about the same error. Please try the following XML syntax when performing the Commit Block List operation and see if the error goes away:
<?xml version="1.0" encoding="utf-8"?>
<BlockList>
    <Block>[base64-encoded-block-id]</Block>
    <Block>[base64-encoded-block-id]</Block>
    ...
    <Block>[base64-encoded-block-id]</Block>
</BlockList>
Notice that we're not using the Latest node; instead, we're using the Block node.
Leaving the sv parameter out and instead setting the service version as part of the PUT request header worked, using:
xhr.setRequestHeader('x-ms-version','2014-02-14');
You can check out this example of an Azure file upload script: http://gauravmantri.com/2013/02/16/uploading-large-files-in-windows-azure-blob-storage-using-shared-access-signature-html-and-javascript/
which works with the SAS generated by the code from the question's original blog link: http://blogs.msdn.com/b/brunoterkaly/archive/2014/06/13/how-to-provision-a-shared-access-signatures-that-allows-clients-to-upload-files-to-to-azure-storage-using-node-js-inside-of-azure-mobile-services.aspx
Add the request header in beforeSend like so:
beforeSend: function(xhr) {
    xhr.setRequestHeader('x-ms-version', '2014-02-14');
},

What is metadata? How do I create metadata and associate it with a Cloud target using Vuforia?

I modified the sample CloudRecog code for my own app. I created a cloud database and got the access keys, then copied these keys into the CloudReco.cpp file. What should I use for the metadata? I didn't understand this. Then, when I was reading the sample code, I saw this line: private static final String mServerURL = "https://ar.qualcomm.at/samples/cloudreco/json/". How do I get my metadata URL?
The Vuforia Cloud Recognition Service enables new types of applications in retail and publishing. An application using Cloud Recognition can query a Cloud Database with camera images (the actual recognition happens in the cloud) and then handle the matching results returned from the cloud to perform local detection and tracking.
Also, every cloud image target can optionally have associated metadata; target metadata is essentially a custom, user-defined blob of data that can be associated with a target and filled with custom information, as long as the data size does not exceed the allowed limit (up to 1 MB per target).
You can therefore use the metadata as a way to store additional content that relates to a specific target, which your application can then process using custom logic.
For example, your application may use the metadata to store:
a simple text message that you want your app to display on the screen of your device when the target is detected, for example:
“Hello, I am your cloud image target XYZ, you have detected me :-) !”
a simple URL string (for instance "http://my_server/my_3d_models/my_model_01.obj") pointing to a custom network location where you have stored other content, such as a 3D model, a video, an image, or any other custom data, so that for each image target your application can use the URL to download the specific content;
more generally, some custom string that your application is able to process and use to perform specific actions;
a full 3D model (not just the URL pointing to a model on a server, but the model itself); for example, the metadata itself could embed an .OBJ 3D model, provided that the size does not exceed the allowed limit (up to 1 MB);
and more ...
How do I create/store metadata for a Cloud target?
Metadata can be uploaded together with an image target at the time you create the target in your Cloud Database, or you can update the metadata of an existing target at a later time. In either case, you can use the online Target Manager, as explained here:
https://developer.vuforia.com/resources/dev-guide/managing-targets-cloud-database-using-target-manager
or you can proceed programmatically using the VWS API, as explained here:
https://developer.vuforia.com/resources/dev-guide/managing-targets-cloud-database-using-developer-api
How can I get the metadata of a Cloud target when it is recognized?
The Vuforia SDK offers a dedicated API to retrieve the metadata of a target in your mobile application. When a cloud target is detected (recognized), a new TargetSearchResult is reported to the application, and the metadata can be obtained using one of these methods:
Vuforia Native SDK - C++ API: TargetSearchResult::getMetaData() - const char*
Vuforia Native SDK - Java API: TargetSearchResult.getMetaData() - String
Vuforia Unity Extension - C# API: TargetSearchResult.Metadata - string
See also the API reference pages:
https://developer.vuforia.com/resources/api/classcom_1_1qualcomm_1_1vuforia_1_1_target_search_result
https://developer.vuforia.com/resources/api/unity/struct_target_finder_1_1_target_search_result
Sample code:
For reference sample code in native Android, see Books.java in the "Books-2-x-y" sample project.
For reference sample code in native iOS, see BooksEAGLView.mm in the "Books-2-x-y" sample project.
For reference sample code in Unity, see the CloudRecoEventHandler.cs script (attached to the CloudRecognition prefab) in the Books sample; in particular, the OnNewSearchResult method shows how to obtain a TargetSearchResult object (from which you can then get the metadata, as shown in the example code).
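For illustration, here is a short Java sketch of that flow, using the getMetaData() method listed above. The polling loop is abbreviated from what the Books sample does, so treat the class and constant names as assumptions to check against your SDK version:

import com.qualcomm.vuforia.TargetFinder;
import com.qualcomm.vuforia.TargetSearchResult;

public class CloudRecoMetadataSketch {
    // Call from the app's update loop once cloud reco is running;
    // 'targetFinder' is obtained from the tracker during initialization.
    static void processSearchResults(TargetFinder targetFinder) {
        int statusCode = targetFinder.updateSearchResults();
        if (statusCode == TargetFinder.UPDATE_RESULTS_AVAILABLE) {
            for (int i = 0; i < targetFinder.getResultCount(); i++) {
                TargetSearchResult result = targetFinder.getResult(i);
                // The metadata blob uploaded with the target, as a String
                // (e.g. a URL or JSON to be processed by your own logic).
                String metadata = result.getMetaData();
                System.out.println("Target metadata: " + metadata);
                // Enable local detection and tracking for this target.
                targetFinder.enableTracking(result);
            }
        }
    }
}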
EDIT: this is in response to the first part of your question ("What should I use for metadata?"), not the second part about how to find the URL.
Based on their documentation (https://developer.vuforia.com/resources/dev-guide/cloud-targets):
The metadata is passed to the application whenever the Cloud Reco target is recognized. It is up to the developer to determine the content of this metadata; Vuforia treats it as a blob and just passes it along to the application. The maximum size of the uploadable metadata is 150 kByte.
I added some debugging to their CloudRecognition app and saw that the payload (presumably the metadata) returned when "recognizing" an image is:
{
    "thumburl": "https://developer.vuforia.com/samples/cloudreco/thumbs/01_thumbnail.png",
    "author": "Karina Borland",
    "your price": "43.15",
    "title": "Cloud Recognition in Vuforia",
    "average rating": "4",
    "# of ratings": "41",
    "targetid": "a47d2ea6b762459bb0aed1ae9dbbe405",
    "bookurl": "https://developer.vuforia.com/samples/cloudreco/book1.php",
    "list price": "43.99"
}
The metadata uploaded along with your image target in the CloudReco database is a .txt file containing whatever you want.
What pherris linked as the payload from the sample application is in fact the contents of a .json file that the given image target's metadata links to.
In the sample application, the structure is as follows:
1. The application activates the camera and recognizes an image target.
2. The application then requests that specific image target's metadata.
3. In this case, the metadata in question is a .txt file with the following content: http://www.link-to-a-specific-json-file.com/randomname.json
4. The application then requests the contents of that specific .json file.
5. The specific .json file contains the text data that pherris posted above.
6. The application uses the text data from the .json file to fill out the actual content of the sample application.
