I can get a document response using an HTTP trigger in an Azure Functions app, like a REST API. However, I cannot delete the document.
I'm selecting DELETE under 'Selected HTTP methods', but I'm not clear on what to do next.
In Input Parameters, when I write 'DELETE FROM mydocument' in the 'SQL Query (optional)' textbox, it doesn't work.
I probably need to change the 'run.csx' code, but how?
Any clue?
I believe the 'SQL Query' section is the input binding for finding the document you wish to work with. That may still be useful depending on how you want to proceed. You can still use an HTTP DELETE trigger if you want, but merely declaring the DELETE verb doesn't automatically perform a delete; it only means the function can be invoked with a DELETE request.
I've previously deleted documents by binding directly to the DocumentClient itself and deleting the Document programmatically.
[FunctionName("DeleteDocument")]
public static async Task Run(
[TimerTrigger("00:01", RunOnStartup = true)] TimerInfo timer,
[DocumentDB] DocumentClient client,
TraceWriter log)
{
var collectionUri = UriFactory.CreateDocumentCollectionUri("ItemDb", "ItemCollection");
var documents = client.CreateDocumentQuery(collectionUri);
foreach (Document d in documents)
{
await client.DeleteDocumentAsync(d.SelfLink);
}
}
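Since your entry point is HTTP, you can hang the same delete off an HTTP DELETE trigger instead of a timer. A minimal sketch; the database, collection, route, and auth level here are my assumptions:

[FunctionName("DeleteDocumentHttp")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "delete", Route = "items/{id}")] HttpRequestMessage req,
    string id,
    [DocumentDB] DocumentClient client,
    TraceWriter log)
{
    // Build the document URI from the route id and delete that single document
    var documentUri = UriFactory.CreateDocumentUri("ItemDb", "ItemCollection", id);
    await client.DeleteDocumentAsync(documentUri);

    // For a partitioned collection you would also pass RequestOptions with the PartitionKey
    return req.CreateResponse(HttpStatusCode.NoContent);
}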
See DocumentDBSamples
Related
I have 2 collections in CosmosDB, Stocks and StockPrices.
The StockPrices collection holds all historical prices and is constantly updated.
I want to create an Azure Function that listens to StockPrices updates (CosmosDBTrigger) and then does the following for each Document passed by the trigger:
1. Find the stock with the matching ticker in the Stocks collection
2. Update the stock price in the Stocks collection
I can't do this with a CosmosDB input binding, as the CosmosDBTrigger passes a List (the binding only works when the trigger passes a single item).
The only way I see this working is if I foreach over the CosmosDBTrigger List, access CosmosDB from my function body, and perform steps 1 and 2 above.
Question: How do I access CosmosDB from within my function?
One of the CosmosDB binding forms is to get a DocumentClient instance, which provides the full range of operations on the container. This way, you should be able to combine the change feed trigger and the item manipulation into the same function, like:
[FunctionName("ProcessStockChanges")]
public async Task Run(
[CosmosDBTrigger(/* Trigger params */)] IReadOnlyList<Document> changedItems,
[CosmosDB(/* Client params */)] DocumentClient client,
ILogger log)
{
// Read changedItems,
// Create/read/update/delete with client
}
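To make that concrete, here's a hedged sketch of what the body could look like for the Stocks/StockPrices scenario; the property names Ticker and Price, the container names, and the connection setting are illustrative assumptions:

[FunctionName("ProcessStockChanges")]
public async Task Run(
    [CosmosDBTrigger(databaseName: "db", collectionName: "StockPrices",
        ConnectionStringSetting = "CosmosConnection",
        LeaseCollectionName = "leases",
        CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changedItems,
    [CosmosDB(ConnectionStringSetting = "CosmosConnection")] DocumentClient client,
    ILogger log)
{
    var stocksUri = UriFactory.CreateDocumentCollectionUri("db", "Stocks");

    foreach (var priceDoc in changedItems)
    {
        var ticker = priceDoc.GetPropertyValue<string>("Ticker");
        var price = priceDoc.GetPropertyValue<decimal>("Price");

        // Find the stock with the matching ticker (parameterized to avoid injection)
        var query = new SqlQuerySpec(
            "SELECT * FROM c WHERE c.Ticker = @ticker",
            new SqlParameterCollection { new SqlParameter("@ticker", ticker) });

        var stock = client.CreateDocumentQuery<Document>(stocksUri, query,
                new FeedOptions { EnableCrossPartitionQuery = true })
            .AsEnumerable()
            .FirstOrDefault();

        if (stock != null)
        {
            // Update the stock's price and write it back
            stock.SetPropertyValue("Price", price);
            await client.ReplaceDocumentAsync(stock.SelfLink, stock);
        }
    }
}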
It's also possible with .NET Core to use dependency injection to provide a full-fledged custom service/repository class to your function instance for interfacing with Cosmos. This is my preferred approach, because I can do validation, control serialization, etc., with the latest version of the Cosmos SDK.
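For example, a sketch of that DI wiring with the newer Cosmos SDK; the startup class, namespace, and setting name are my assumptions:

using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Register a singleton CosmosClient; function classes then take it
            // (or a repository wrapping it) as a constructor parameter
            builder.Services.AddSingleton(s => new CosmosClient(
                System.Environment.GetEnvironmentVariable("CosmosConnectionString")));
        }
    }
}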
You may have done so intentionally, but it's worth considering combining your data into a single container partitioned by, for example, a combination of record type (Stock/StockPrice) and identifier. This simplifies things and can be more cost- and resource-efficient than multiple containers.
Ended up going with @Noah Stahl's suggestion. Leaving this here as an alternative.
Couldn't figure out how to do this directly, so came up with a work-around (step 1 is sketched after the list):
1. Add a function with a CosmosDBTrigger on the StockPrices collection and a Queue output binding
2. foreach over the Documents from the trigger, serialize them, and add them to the Queue
3. Add a function with a QueueTrigger, a CosmosDB input binding for the Stocks collection (with PartitionKey and Id set to StockTicker), and a CosmosDB output binding for the Stocks collection
4. Update the Stock from the CosmosDB input binding with values from the QueueTrigger
5. Assign the updated Stock to the CosmosDB output binding parameter (this updates the record in the DB)
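A hedged sketch of the first function; the names, queue, and connection settings are illustrative:

[FunctionName("StockPricesToQueue")]
public static void Run(
    [CosmosDBTrigger(databaseName: "db", collectionName: "StockPrices",
        ConnectionStringSetting = "CosmosConnection",
        LeaseCollectionName = "leases",
        CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
    [Queue("stock-price-updates")] ICollector<string> queue,
    ILogger log)
{
    foreach (var doc in changes)
    {
        // Document.ToString() returns the document's JSON; the QueueTrigger
        // function deserializes it and updates the matching Stock
        queue.Add(doc.ToString());
    }
}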
This said, I'd like to hear about more straightforward ways of doing this, as my approach seems like a hack.
Overview:
We have a third party that hosts a text value at a given endpoint. A GET request to a URL, where we also pass a key and parameters, returns a string value (of decimal numbers and a space).
I created some Apex code, including an @InvocableMethod, so I could call the Apex from a flow where I pass in the URL; the text is then returned to the flow, and I go on to update a record.
Here is the method; there is also a class, FR_Amount_Variables, storing the URL and String @InvocableVariable values.
public class FR_Amount_Sync {
    @InvocableMethod(label='FR Amount Raised Get')
    public static List<FR_Amount_Variables> getFRamount(List<FR_Amount_Variables> inputURL) {
        FR_Amount_Variables amtvar = new FR_Amount_Variables();
        List<FR_Amount_Variables> getFRamount = new List<FR_Amount_Variables>();
        String endpoint = inputURL[0].URL;

        // Make the GET callout to the third-party endpoint
        Http http = new Http();
        HttpRequest request = new HttpRequest();
        request.setEndpoint(endpoint);
        request.setMethod('GET');
        HttpResponse response = http.send(request);

        // Strip all whitespace; default to '0.00' when the body is empty
        String amounts = response.getBody();
        amounts = amounts.replaceAll('\\s+', '');
        if (String.isEmpty(amounts)) {
            amounts = '0.00';
        }

        amtvar.amount = Decimal.valueOf(amounts);
        getFRamount.add(amtvar);
        return getFRamount;
    }
}
The image of the flow can be seen below
Update Flow
Issue:
When I run the flow in Debug mode, set the 3 input variables, and run it, the flow executes the Apex and updates the specified record correctly.
Likewise, if I preset the flow's input variables (add a default value) and then just run the flow, the Apex and record updates succeed, with the record being updated with the correct value from the third party.
The issue is when I try to run the flow automatically, either via Process Builder or via Mass Action Scheduler; I receive system exception errors.
An Apex error occurred:
System.CalloutException: You have uncommitted work pending. Please commit or rollback before calling out
and
An Apex error occurred: System.CalloutException: Callout loop not allowed
respectively.
I was wondering if there is any way to trigger the flow that doesn't cause an error. Otherwise, is there a way I can make an HTTP GET callout and then update a record with the received value?
We cannot do DML before a callout in the same transaction; DML can be done after the callout.
So the best practice is to make the callout from a future method (sketched below). The future method runs in its own transaction, so the flow's pending DML no longer blocks the callout, and the flow can handle its own DML operations.
For example, check this link -
https://www.infallibletechie.com/2020/04/how-to-do-callout-from-flow-in.html
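For instance, a minimal Apex sketch of that pattern; the record type and the Amount_Raised__c field are illustrative, not from your org:

public class FR_Amount_Async {
    // callout=true permits the HTTP request; because this runs in its own
    // transaction, the flow's uncommitted work no longer blocks the callout
    @future(callout=true)
    public static void getFRamountAsync(Id recordId, String endpoint) {
        HttpRequest request = new HttpRequest();
        request.setEndpoint(endpoint);
        request.setMethod('GET');
        HttpResponse response = new Http().send(request);

        String body = response.getBody().replaceAll('\\s+', '');
        Decimal amount = String.isEmpty(body) ? 0.00 : Decimal.valueOf(body);

        // DML is safe here because the callout has already completed
        Campaign rec = new Campaign(Id = recordId, Amount_Raised__c = amount);
        update rec;
    }
}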
I have an Azure Function with a Cosmos DB trigger which makes some calculations and writes the results to the DB. If something goes wrong, I want the ability to start over from the first item, or from a specific item, and run the calculations again. Is that possible? Thanks.
public static void Run([CosmosDBTrigger(
    databaseName: "db",
    collectionName: "collection",
    ConnectionStringSetting = "DocDbConnStr",
    CreateLeaseCollectionIfNotExists = true,
    LeaseCollectionName = "leases")] IReadOnlyList<Document> input, TraceWriter log)
{
    ...
}
Right now, the StartFromBeginning option is not exposed to the Cosmos DB Trigger. The default behavior is to start receiving changes from the moment the Function starts running; leases/checkpoints are generated in case the Host/Runtime shuts down, so when the Host/Runtime is back up, it picks up from the last checkpointed item.
The Trigger does not implement dead-lettering or error handling, as that might generate infinite loops / unexpected billing / multiple processing of the same batch when the error is not related to the batch itself (for example, if you process the documents and then send an email, and the email fails, the entire batch would be re-processed for an error not related to the feed itself). So we recommend that users implement their own try/catch or error-handling logic inside the Function's code, as sketched below. It's the same approach as the Event Hub Trigger.
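For instance, a minimal per-document try/catch sketch (the processing itself is a placeholder):

public static void Run([CosmosDBTrigger(
    databaseName: "db",
    collectionName: "collection",
    ConnectionStringSetting = "DocDbConnStr",
    CreateLeaseCollectionIfNotExists = true,
    LeaseCollectionName = "leases")] IReadOnlyList<Document> input, TraceWriter log)
{
    foreach (var document in input)
    {
        try
        {
            // Process a single document here
        }
        catch (Exception ex)
        {
            // Log (or dead-letter to your own queue) and keep going, so one
            // bad document doesn't cause the whole batch to be re-processed
            log.Error($"Failed to process document {document.Id}", ex);
        }
    }
}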
That being said, we are in the process of exposing several new options on the Trigger, and there is a contributor working on an advanced retry mechanism.
As @Matias Quaranta and @Pankaj Rawat say in the comments, the accepted answer is old and no longer true. You can now use StartFromBeginning as a C# attribute property within your Azure Function's parameter list, like so:
[FunctionName(nameof(MyAzureFunction))]
public async Task RunAsync([CosmosDBTrigger(
    databaseName: "myCosmosDbName",
    collectionName: "myCollectionName",
    ConnectionStringSetting = "cosmosConnectionString",
    LeaseCollectionName = "leases",
    CreateLeaseCollectionIfNotExists = true,
    MaxItemsPerInvocation = 1000,
    StartFromBeginning = true)] IReadOnlyList<Document> documents)
{
    ....
}
Please change the accepted answer.
The current offsets (positions in the Cosmos DB change feed) are managed by clients, the Azure Functions runtime in this case.
Functions store the offsets in the lease collection (it's called leases in your example).
To restart from a specific item, you would have to take a snapshot of the documents in the leases collection at some point, and then restore the leases collection to that snapshot when needed.
I am not familiar with a tool that automates this for you, other than generic tools for working with Cosmos DB collections.
Check the startFromBeginning option available in Functions v2. Unfortunately, I'm still using v1 and am not able to verify it.
When set, it tells the Trigger to start reading changes from the beginning of the collection's history instead of from the current time. This only works the first time the Trigger starts; on subsequent runs, the checkpoints are already stored. Setting this to true when leases have already been created has no effect.
Using Raven client and server #30155. I'm basically doing the following in a controller:
public ActionResult Update(string id, EditModel model)
{
    var store = provider.StartTransaction(false);
    var document = store.Load<T>(id);
    model.UpdateEntity(document); // overwrite document property values with those of the edit model
    document.Update(store);       // tell the document to update itself if it passes some conflict checking
}
Then in document.Update, I try to do this:
var old = store.Load<T>(this.Id);
if (old.Date != this.Date)
{
    // Resolve conflicts that occur by moving the document period
}
store.Update(this);
Now I run into the problem that old gets loaded from memory instead of from the database, and thus already contains the updated values. As a result, it never enters the conflict check.
I tried working around the problem by changing the Controller.Update method to:
public ActionResult Update(string id, EditModel model)
{
    var store = provider.StartTransaction(false);
    var document = store.Load<T>(id);
    store.Dispose();
    model.UpdateEntity(document); // overwrite document property values with those of the edit model
    store = provider.StartTransaction(false);
    document.Update(store);       // tell the document to update itself if it passes some conflict checking
}
This results in a Raven.Client.Exceptions.NonUniqueObjectException with the text: 'Attempted to associate a different object with id'.
Now, the questions:
1. Why would Raven care if I try to associate a new object with the id, as long as the new object carries the proper e-tag and type?
2. Is it possible to load a document in its database state (overriding the default behavior of fetching the document from memory if it exists there)?
3. What is a good solution for getting document.Update() to work (preferably without having to pass the old object along)?
Why would Raven care if I try to associate a new object with the id, as long as the new object carries the proper e-tag and type?
RavenDB leans on being able to serve documents from memory (which is faster). By checking for persisted objects with the same id, hard-to-debug errors are prevented.
EDIT: See Rayen's comment below. If you enable concurrency checking / provide the e-tag in the Store, you can bypass the error.
Is it possible to load a document in its database state (overriding the default behavior of fetching the document from memory if it exists there)?
Apparently not.
What is a good solution for getting document.Update() to work (preferably without having to pass the old object along)?
I went with refactoring the document.Update method to take an optional parameter receiving the old date period, since #1 and #2 don't seem possible.
RavenDB supports optimistic concurrency out of the box. The only thing you need to do is enable it:
session.Advanced.UseOptimisticConcurrency = true;
See:
http://ravendb.net/docs/article-page/3.5/Csharp/client-api/session/configuration/how-to-enable-optimistic-concurrency
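A short sketch of how that plays out in a session (the Item entity and variable names are illustrative):

using (var session = store.OpenSession())
{
    session.Advanced.UseOptimisticConcurrency = true;

    var doc = session.Load<Item>(id);
    doc.Date = newDate;

    try
    {
        // Fails with a ConcurrencyException if the stored document changed
        // after we loaded it, instead of silently overwriting the new state
        session.SaveChanges();
    }
    catch (Raven.Abstractions.Exceptions.ConcurrencyException)
    {
        // Reload and resolve the conflict, or surface it to the caller
    }
}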
I am new to Cloudant but have found it useful for a first stage of IoT data. However, I need to subscribe to changes based on an id field that is separate from the _id and is unique to the sensor sending the data. The examples I've seen so far haven't helped with this problem. What I'm doing now is sending a separate JSON doc for each post, so the query should return new docs with this sensor id. The JSON docs sometimes come in every second, but it can also be hours between them.
I'm using C# in a .NET web app. The code below makes a call to the Cloudant database and returns the data I want, based on an index that was created for the field SensorID:
json =
{
  "selector": {
    "SensorID": "h7365cf3-17bc-4422-b436-f7bcf12b2e2a"
  },
  "fields": [
    "Data"
  ]
}
url = my Cloudant URL + "/_find".
This returns all docs whose SensorID field corresponds to the SensorID value in the JSON query, with just the JSON object of each doc nested in the Data field.
using (WebClient client = new WebClient())
{
    byte[] postBytes = System.Text.Encoding.UTF8.GetBytes(json.ToString());
    client.UseDefaultCredentials = true;
    client.Credentials = new NetworkCredential(username, password);
    client.Headers[HttpRequestHeader.ContentType] = "application/json";
    var response = client.UploadData(url, "POST", postBytes);
    JObject iJson = JObject.Parse(client.Encoding.GetString(response));
    return parseIncoming(iJson);
}
When the call is a GET to my Cloudant URL + "/_db_updates", it returns information about changes to the whole database. This can be set up as a continuous feed.
I was hoping this meant I could subscribe to document changes to get new data as it arrives, like Redis Pub/Sub. I'm starting to think this might not be the case, but if anybody can show me how to do it, I would be grateful.
As @adasilva70 said, you need to use the _changes feed.
You can filter changes with an appropriate filter function (so that only changes to the documents you're interested in show up).
You can get all updates since a given sequence point (everything since the last data you received), and/or you can use long polling or continuous mode for instant notifications.
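For example, here's a hedged sketch in the same WebClient style as your _find call, using Cloudant's built-in _selector filter on the _changes feed (the since bookkeeping is simplified, and dbUrl/lastSeq are assumed variables):

using (WebClient client = new WebClient())
{
    client.Credentials = new NetworkCredential(username, password);
    client.Headers[HttpRequestHeader.ContentType] = "application/json";

    // filter=_selector makes the server return only this sensor's documents;
    // feed=longpoll blocks until at least one new change arrives
    string changesUrl = dbUrl + "/_changes?feed=longpoll&include_docs=true&filter=_selector&since=" + lastSeq;
    string selector = "{ \"selector\": { \"SensorID\": \"h7365cf3-17bc-4422-b436-f7bcf12b2e2a\" } }";

    byte[] response = client.UploadData(changesUrl, "POST",
        System.Text.Encoding.UTF8.GetBytes(selector));
    JObject changes = JObject.Parse(System.Text.Encoding.UTF8.GetString(response));

    // "results" holds the changed docs; "last_seq" is the next "since" value
    lastSeq = (string)changes["last_seq"];
}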