Set metadata in REST request to Put Blob in Azure

I am able to upload a file to an Azure blob using the REST API provided by Azure.
I want to set metadata at the time I make the Put Blob request, but when I set it in the header as shown here, I am unable to upload the file and get the following exception, org.apache.http.client.ClientProtocolException, from the last line of the code below:
HttpPut req = new HttpPut(uri);
req.setHeader("x-ms-blob-type", blobType);
req.setHeader("x-ms-date", date);
req.setHeader("x-ms-version", storageServiceVersion);
req.setHeader("x-ms-meta-Cat", user);
req.setHeader("Authorization", authorizationHeader);
HttpEntity entity = new InputStreamEntity(is,blobLength);
req.setEntity(entity);
HttpResponse response = httpClient.execute(req);
Regarding this, I have two questions:
Can setting different metadata avoid overwriting of the file? See my question about this here.
If yes for the first question, how do I set metadata in the REST request to put a blob into Azure?
Please help.

So a few things are going on here.
Regarding the error you're getting: it is because you're not including your metadata header when calculating the authorization header. Please read the Constructing the Canonicalized Headers String section here: http://msdn.microsoft.com/en-us/library/windowsazure/dd179428.aspx.
Based on this, you would need to change the following line of code (from your blog post):
String canonicalizedHeaders = "x-ms-blob-type:"+blobType+"\nx-ms-date:"+date+"\nx-ms-version:"+storageServiceVersion;
to
String canonicalizedHeaders = "x-ms-blob-type:"+blobType+"\nx-ms-date:"+date+"\nx-ms-meta-cat:"+user+"\nx-ms-version:"+storageServiceVersion;
(Note: I have just made these changes in Notepad, so they may not work. Please go to the link I mentioned above for correctly creating the canonicalized headers string.)
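To illustrate, here is a rough, untested sketch of how the signature could be computed once the metadata header is part of the canonicalized headers string. It assumes Shared Key authorization for service version 2009-09-19 or later, and accountName, accountKey (the Base64 account key) and canonicalizedResource are placeholders you would build as per the documentation linked above:
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Empty lines in the string-to-sign are standard headers this request
// does not send (Content-Encoding, Content-MD5, If-Match, Range, etc.).
String stringToSign = "PUT\n"
        + "\n"                    // Content-Encoding
        + "\n"                    // Content-Language
        + blobLength + "\n"       // Content-Length
        + "\n"                    // Content-MD5
        + "\n"                    // Content-Type
        + "\n"                    // Date (empty because x-ms-date is set)
        + "\n\n\n\n\n"            // If-Modified-Since through Range
        + canonicalizedHeaders + "\n"
        + canonicalizedResource;  // e.g. "/" + accountName + "/mycontainer/myblob.txt"

Mac hmac = Mac.getInstance("HmacSHA256");
hmac.init(new SecretKeySpec(Base64.getDecoder().decode(accountKey), "HmacSHA256"));
String signature = Base64.getEncoder()
        .encodeToString(hmac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
String authorizationHeader = "SharedKey " + accountName + ":" + signature;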
Can setting different metadata avoid overwriting of the file?
Not sure what you mean by this. You can update the metadata of a blob by performing the Set Blob Metadata operation on a blob.
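If the goal is to change metadata on a blob that already exists without re-uploading it, Set Blob Metadata is a separate PUT against the blob URI with comp=metadata. A rough, untested sketch reusing the variables from the question (the Authorization header must be recomputed for this request, with comp:metadata included in the canonicalized resource, so metadataAuthorizationHeader below is a placeholder):
HttpPut metaRequest = new HttpPut(uri + "?comp=metadata");
metaRequest.setHeader("x-ms-date", date);
metaRequest.setHeader("x-ms-version", storageServiceVersion);
// Set Blob Metadata replaces all metadata on the blob, so send every
// key/value pair you want to keep.
metaRequest.setHeader("x-ms-meta-Cat", user);
metaRequest.setHeader("Authorization", metadataAuthorizationHeader); // recomputed, not reused
HttpResponse metaResponse = httpClient.execute(metaRequest);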

Related

Firebase Storage remove custom metadata key

I couldn't remove a custom metadata key from a file in Firebase storage.
This is what I tried so far:
blob = bucket.get_blob("dir/file")
metadata = blob.metadata
metadata.pop('custom_key', None) # or del metadata['custom_key']
blob.metadata = metadata
blob.patch()
I also tried to set its value to None but it didn't help.
It seems that there are a couple of reasons that could be preventing you from deleting the custom metadata. I will address them individually, so it's easier to understand.
First, it seems that when you read the metadata with blob.metadata, it is returned as read-only, as clarified here. So your updates will not work the way you are applying them. Second, it seems that saving the metadata back to the blob follows a different order from the one you are using, as shown here.
You can give it a try using the code below:
blob = bucket.get_blob("dir/file")
metadata = blob.metadata
metadata.pop('custom_key', None)  # or del metadata['custom_key']
blob.patch()
blob.metadata = metadata
While this code is untested, I believe changing the order might help you avoid the blob.metadata read-only situation.
In case this doesn't help, I would recommend raising an issue in the official GitHub repository for the Python library for Cloud Storage, for further clarification from the developers.

Handling of etags in batch request using SAP Cloud SDK

I am trying to carry out a batch request including a create, an update and a delete (all on different sales orders). As per this question here, which deals with something similar, I have done a GET for the items I want to update and delete before adding them to the batch request. I am using SalesOrder.builder() to prepare the SalesOrder I want to create.
final ErpHttpDestination destination = DestinationAccessor.getDestination(DESTINATION_NAME)
.asHttp().decorate(DefaultErpHttpDestination::new);
final SalesOrderItem salesOrderItem1 = SalesOrderItem.builder().material(material)
.requestedQuantityUnit(requestedQuantityUnit).build();
final SalesOrder salesOrder1 = SalesOrder.builder().distributionChannel(distributionChannel)
.salesOrderType(salesOrderType).salesOrganization(salesOrganization)
.organizationDivision(organizationDivision).soldToParty(soldToParty)
.item(salesOrderItem1).build();
final SalesOrder orderToUpdate = new GetSingleSalesOrderCommand(orderToUpdateID, destination,
new DefaultSalesOrderService()).execute();
orderToUpdate.setSoldToParty(updateSoldToParty);
final SalesOrder orderToDelete = new GetSingleSalesOrderCommand(orderToDeleteID, destination,
new DefaultSalesOrderService()).execute();
SalesOrderServiceBatch service = new DefaultSalesOrderServiceBatch(
new DefaultSalesOrderService());
BatchResponse bRes = service.beginChangeSet().createSalesOrder(salesOrder1).updateSalesOrder(orderToUpdate)
.deleteSalesOrder(orderToDelete).endChangeSet().execute(destination);
I am then logging the BatchResponse and see I am getting a Batch Response Failure:
eTag handling not supported for http method 'POST'
I have searched for this error but can't find any resolution to it. Any ideas?
Thanks.
UPDATE: Increasing the logging to DEBUG, I can see the batch request that is being sent, and there is an If-Match header being added to the create request, which doesn't make sense, as it can't match something that doesn't exist yet.
"msg":"--batch_123\r\nContent-Type: multipart/mixed; boundary=changeset_(changeset number)\r\n\r\n
--changeset_(changeset number)\r\nContent-Type: application/http\r\nContent-Transfer-Encoding: binary\r\n\r\n
POST /sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder HTTP/1.1\r\n
Content-Length: 193\r\n
If-Match: W/\"datetimeoffset'2020-05-01T11%3A51%3A16.8631720Z'\"\r\n
Accept: application/json;odata=verbose\r\nContent-Type:......
Then I get the error:
Inner Error:
"msg":"batch responseFailure(com.sap.cloud.sdk.odatav2.connectivity.ODataException: null:
<?xml version=\"1.0\" encoding=\"utf-8\"?><error xmlns=\"http://schemas.microsoft.com/ado/2007/08/dataservices/metadata\">
<code>/IWFND/CM_MGW/537</code><message xml:lang=\"en\">eTag handling not supported for http method 'POST'</message><innererror>...
However, what does work is wrapping each request in its own changeset, e.g.:
service
.beginChangeSet().createSalesOrder(order).endChangeSet()
.beginChangeSet().updateSalesOrder(orderToUpdate).endChangeSet()
.beginChangeSet().deleteSalesOrder(orderToDelete).endChangeSet()
.execute(destination);
Edit:
This is fixed as of version 3.25.0.
Initial Answer:
This seems to be a bug. I was able to reproduce this with a different service and the behaviour is the same: the If-Match header is incorrectly applied to the POST operation as well.
When debugging, it seems the request is built up correctly, with the header only present on update and delete. However, when the batch request is serialised into the multipart payload, the header gets added to all requests.
So until this is fixed, the workaround is isolating these operations via changesets, as you already pointed out.
Looks like eTag handling is not supported for your endpoint.
Now you can do the following to omit eTag headers:
orderToUpdate.setVersionIdentifier(null);
orderToDelete.setVersionIdentifier(null);
However, I'm not sure how 'POST' fits the error description, because update uses PATCH and delete uses DELETE. The only POST I would expect comes from create. But we do not add entity version identifier (eTag) headers in the OData create operation. If the same error still comes up, please try again without running createSalesOrder(salesOrder1).
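Putting both suggestions together, an untested sketch of the adjusted batch call could look like this:
// Clear the version identifiers so no If-Match headers are sent.
orderToUpdate.setVersionIdentifier(null);
orderToDelete.setVersionIdentifier(null);

BatchResponse bRes = new DefaultSalesOrderServiceBatch(new DefaultSalesOrderService())
        .beginChangeSet()
        .createSalesOrder(salesOrder1)
        .updateSalesOrder(orderToUpdate)
        .deleteSalesOrder(orderToDelete)
        .endChangeSet()
        .execute(destination);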

Can't seem to find the issue with the requestID parameter for the request header

I am trying to pull data from a REST API that uses a "similar standard to JSON RPC". The params I am passing look right according to the documentation here and here.
The error I am receiving is ...message:"Header missing request ID".... I am unsure what I am missing that would properly declare the request ID.
I have looked at the documentation provided by the API I am trying to pull data from, but it's not very helpful, considering it's all in PHP and cURL. I am trying to complete this task using python-requests.
getParams = {'method': 'getCustomers', 'params':{'where':'', 'limit': 2}, 'id': 'getCustomers'}
Result:
{"result":null,"error":{"code":102,"message":"Header missing request ID","data":[]},"id":null}
The return result should contain a list of All Customers and their attributes in JSON format.
Turns out there is nothing wrong with the code I am using. There is an issue with the API I am attempting to call.
In my situation, I was getting the same error back and was required to send an X-Request-ID header. I fixed it by adding the following to my headers:
import uuid

headers = {
    'X-Request-ID': str(uuid.uuid1()),  # generate GUID based on host & time
    ...
Note that (for me) the GUID needed to be of a specific format (e.g. matching the Regex ^[{]?[0-9a-fA-F]{8}-([0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}[}]?$
taken from https://www.geeksforgeeks.org/how-to-validate-guid-globally-unique-identifier-using-regular-expression/). For example, it still gave the same error if I just sent "test".
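For reference, a minimal python-requests sketch combining both points (the URL is a placeholder, and whether this particular API expects the call as a POSTed JSON body is an assumption based on its JSON-RPC-like style):
import uuid
import requests

url = "https://api.example.com/endpoint"  # placeholder endpoint
headers = {"X-Request-ID": str(uuid.uuid1())}  # GUID-format request ID
payload = {"method": "getCustomers", "params": {"where": "", "limit": 2}, "id": "getCustomers"}

# JSON-RPC-style APIs typically expect the call in the request body.
response = requests.post(url, json=payload, headers=headers)
print(response.json())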

SOAP UI how to configure a PUT request body programmatically

I'm configuring some requests programmatically in my test cases: I can set headers, custom properties, teardown scripts, etc. However, I can't find how to set a standard JSON body for my PUT requests.
Is there any possibility from the restMethod class?
So far I end up getting the method used:
restService = testRunner.testCase.testSuite.project.getInterfaceAt(0)
resource = restService.getOperationByName(resource_name)
request = resource.getRequestAt(0)
httpMethod = request.getMethod()
if (httpMethod.toString().equals("PUT"))
but then I'm stuck trying to find how to set a standard body for my PUT requests.
I tried the getRequestParts() method, but it didn't give me what I expected.
Can anyone help, please?
Thank you,
Alexandre
I've managed this. I had a suite of tests where I wanted to squirt the content of interest into the "bare bones" request, the idea being that I can wrap this in a data-driven test. Then, for each row in my data spreadsheet, I pull in the request body for my test. At first I simply pulled the request body from a data source value in my spreadsheet, but this became unmanageable in my spreadsheet.
So, another tactic. In my test data sheet (data source) I stored the name of the file that contains the payload I want to squirt in.
In the test itself, I put a Groovy step immediately before the step I want to push the payload into.
The Groovy script uses the data source to first get the file name containing the payload, and then reads the contents of the file.
In the step I want to push the data into, I just use a get-from-data expansion, e.g. ${groovyStep#result}.
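For illustration, the Groovy step could be as small as this (untested; "DataSource" and its payloadFile property are assumed names for the data source step and the column holding the file name):
// Resolve the payload file name from the data source step, then return
// the file content; the next step picks it up via ${groovyStep#result}.
def fileName = context.expand('${DataSource#payloadFile}')
return new File(fileName).getText('UTF-8')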
If this doesn't completely make sense, let me know and I'll update with a screenshot when I have access to SoapUI.

Azure Table Storage access time - inserting/reading from

I'm making a program that stores data from CSV files in Azure tables and reads it back. The CSV files can have a varying number of columns, and between 3k and 50k rows. What I need to do is upload that data into an Azure table. So far I have managed to both upload data and retrieve it.
I'm using the REST API, and for uploading I'm creating an XML batch request with 100 rows per request. That works fine, except it takes a bit too long to upload; for example, 3k rows take around 30 seconds. Is there any way to speed that up? I noticed that most of the time is spent processing the response (the ReadToEnd() call). I read somewhere that setting the proxy to null could help, but it doesn't do much in my case.
I also found somewhere that it is possible to upload the whole XML request to a blob and then execute it from there, but I couldn't find any example of doing that.
using (Stream requestStream = request.GetRequestStream())
{
requestStream.Write(content, 0, content.Length);
}
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
Stream dataStream = response.GetResponseStream();
using (var reader = new StreamReader(dataStream))
{
String responseFromServer = reader.ReadToEnd();
}
}
As for retrieving data from Azure tables, I managed to get 1000 entities per request. That takes me around 9 seconds for a CSV with 3k rows. Again, most of the time is spent reading from the stream, when I'm calling this part of the code (again ReadToEnd()):
response = request.GetResponse() as HttpWebResponse;
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
string result = reader.ReadToEnd();
}
Any tips?
As you mentioned, you are using the REST API directly, so you would have to write extra code and depend on your own methods to implement the performance improvements, whereas the Storage Client Library already has such features built in. In your case, using the Storage Client Library would be best, as you can use already-built features to expedite insert, upsert, etc., as described here.
If you do use the Storage Client Library and ADO.NET, you can use the article below, written by the Windows Azure Table team, as the supported way to improve Azure table access performance:
.NET and ADO.NET Data Service Performance Tips for Windows Azure Tables
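For example, with a later version of the .NET storage client library, an entity group transaction batches up to 100 entities into one round trip. A sketch only, not tied to the exact library version you are using; connectionString and entities are placeholders:
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudTable table = account.CreateCloudTableClient().GetTableReference("mytable");

// All entities in one batch must share the same PartitionKey,
// and a batch holds at most 100 operations.
TableBatchOperation batch = new TableBatchOperation();
foreach (DynamicTableEntity entity in entities)
{
    batch.Insert(entity);
}
table.ExecuteBatch(batch);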
