Azure Digital Twins - twin-to-twin update

First of all, I'd like to make it clear that I have no experience at all with C#, so this might be a very trivial question.
I am trying to follow this tutorial: https://learn.microsoft.com/en-us/azure/digital-twins/how-to-send-twin-to-twin-events
using this function sample: https://github.com/Azure-Samples/azure-digital-twins-getting-started/blob/main/azure-functions/twin-updates/ProcessDTRoutedData.cs
However, I am running into some strange behavior.
The idea of this code is that every time a child twin is updated, for example with data coming from an IoT Hub, the parent twin is also updated. For example, if we have a room with 2 thermostats, then whenever the thermostats are updated the room temperature should be updated with the average of the thermostat temperatures.
My issue is the following:
If thermostat 1 has a value of 25 °C and thermostat 2 has a value of 27 °C at timestep 1, my room temperature should be 26 °C. However, I get no data at all; the parent twin only gets data at timestep 2, when a new update is done on the thermostats. So if at timestep 2 thermostat 1 has a temperature of 30 °C and thermostat 2 a temperature of 32 °C, the room should show 31 °C, yet it shows the 26 °C from timestep 1. As you can see, there is always this delayed reaction.
To be a bit more specific about my case: I have a device that aggregates (sums) readings from 4 other devices, and the total on the aggregator looks off by one iteration every time this function is called.
// Default URL for triggering event grid function in the local environment.
// http://localhost:7071/runtime/webhooks/EventGrid?functionName={functionname}
using IoTHubTrigger = Microsoft.Azure.WebJobs.EventHubTriggerAttribute;
using Azure;
using Azure.Core.Pipeline;
using Azure.DigitalTwins.Core;
using Azure.Identity;
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System;
using System.Net.Http;
using System.Linq;
using System.Reflection.Metadata.Ecma335;
using System.Threading.Tasks;
using System.Collections.Generic;
using TwinUpdatesSample.Dto;
namespace TwinUpdatesSample
{
public class ProcessDTRoutedData
{
private static HttpClient _httpClient = new HttpClient();
private static string _adtServiceUrl = Environment.GetEnvironmentVariable("ADT_SERVICE_URL");
/// <summary>
/// The outcome of this function is to get the average floor temperature and humidity values based on the rooms on that floor.
///
/// 1) Get the incoming relationship of the room. This will get the floor twin ID
/// 2) Get a list of all the rooms on the floor and get the humidity and temperature properties for each
/// 3) Calculate the average temperature and humidity across all the rooms
/// 4) Update the temperature and humidity properties on the floor
/// </summary>
/// <param name="eventGridEvent"></param>
/// <param name="log"></param>
/// <returns></returns>
[FunctionName("ProcessDTRoutedData")]
public async Task Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
{
log.LogInformation("ProcessDTRoutedData (Start)...");
DigitalTwinsClient client;
DefaultAzureCredential credentials;
// if no Azure Digital Twins service URL, log error and exit method
if (_adtServiceUrl == null)
{
log.LogError("Application setting \"ADT_SERVICE_URL\" not set");
return;
}
try
{
//Authenticate with Azure Digital Twins
credentials = new DefaultAzureCredential();
client = new DigitalTwinsClient(new Uri(_adtServiceUrl), credentials, new DigitalTwinsClientOptions { Transport = new HttpClientTransport(_httpClient) });
}
catch (Exception ex)
{
log.LogError($"Exception: {ex.Message}");
client = null;
credentials = null;
return;
}
if (client != null)
{
if (eventGridEvent != null && eventGridEvent.Data != null)
{
JObject message = (JObject)JsonConvert.DeserializeObject(eventGridEvent.Data.ToString());
log.LogInformation($"A state change in the Helix Building DT has been detected");
//log.LogInformation($"Message: {message}");
string twinId = eventGridEvent.Subject.ToString();
log.LogInformation($"TwinId: {twinId}");
string modelId = message["data"]["modelId"].ToString();
log.LogInformation($"ModelId: {modelId}");
string smartPlugAggregatorId = null;
if (modelId.Contains("dtmi:digitaltwins:rec_3_3:core:logicalDevice:SmartPlug;1"))
{
log.LogInformation($"Updating ProjectSmartPlug state with new state from {twinId}");
// logicalDevice SmartPlug should always report to a logicalDevice SmartPlug Aggregator;
// go get the sourceId for the logicalDevice SmartPlug Aggregator the logicalDevice SmartPlug is related to
AsyncPageable<IncomingRelationship> smartPlugAggregatorList = client.GetIncomingRelationshipsAsync(twinId);
// get the sourceId (parentId)
await foreach (IncomingRelationship smartPlugAggregator in smartPlugAggregatorList)
if (smartPlugAggregator.RelationshipName == "observes")
{
smartPlugAggregatorId = smartPlugAggregator.SourceId;
}
log.LogInformation($"{smartPlugAggregatorId} observes to {twinId} for change in state during this iteration");
// if the parentId (SourceId) is null or empty, then something went wrong
if (string.IsNullOrEmpty(smartPlugAggregatorId))
{
log.LogError($"'SourceId' for observes relationship is missing from GetIncomingRelationships({twinId}) call. This should never happen.");
return;
}
AsyncPageable<BasicDigitalTwin> queryResponse = client.QueryAsync<BasicDigitalTwin>($"SELECT smartPlug FROM digitaltwins smartPlugAggregator JOIN smartPlug RELATED smartPlugAggregator.observes WHERE smartPlugAggregator.$dtId = '{smartPlugAggregatorId}'");
List<SmartPlug> SmartPlugList = new List<SmartPlug>();
// loop through each smartPlugSensor and build a list of smartPlugSensors
await foreach(BasicDigitalTwin twin in queryResponse)
{
JObject smartPlugPayload = (JObject)JsonConvert.DeserializeObject(twin.Contents["smartPlug"].ToString());
log.LogInformation($"Smart Plug {twin.Id} payload: {smartPlugPayload}");
SmartPlugList.Add(new SmartPlug() {
id = twin.Id,
ActiveEnergyWh = Convert.ToDouble(smartPlugPayload["ActiveEnergyWh"]),
ActivePowerW = Convert.ToDouble(smartPlugPayload["ActivePowerW"]),
ReActiveEnergyVARh = Convert.ToDouble(smartPlugPayload["ReActiveEnergyVARh"]),
ReActivePowerVAR = Convert.ToDouble(smartPlugPayload["ReActivePowerVAR"]),
});
}
// if no rooms, then something went wrong and method should exit
if (SmartPlugList.Count < 1)
{
log.LogError($"'roomList' is empty for floor ({smartPlugAggregatorId}). This should never happen.");
return;
}
// get the sum from the list of smartPlug Logical Devices
double sumActiveEnergyWh = SmartPlugList.Sum(x => x.ActiveEnergyWh);
log.LogInformation($"Sum ActiveEnergyWh : {sumActiveEnergyWh.ToString()}");
double sumActivePowerW = SmartPlugList.Sum(x => x.ActivePowerW);
log.LogInformation($"Sum ActivePowerW : {sumActivePowerW.ToString()}");
double sumReActiveEnergyVARh = SmartPlugList.Sum(x => x.ReActiveEnergyVARh);
log.LogInformation($"Sum ReActiveEnergyVARh : {sumReActiveEnergyVARh.ToString()}");
double sumReActivePowerVAR = SmartPlugList.Sum(x => x.ReActivePowerVAR);
log.LogInformation($"Sum ReActivePowerVAR : {sumReActivePowerVAR.ToString()}");
var updateTwinData = new JsonPatchDocument();
updateTwinData.AppendReplace("/ActiveEnergyWh", Math.Round(sumActiveEnergyWh, 2));
updateTwinData.AppendReplace("/ActivePowerW", Math.Round(sumActivePowerW, 2));
try
{
log.LogInformation(updateTwinData.ToString());
await client.UpdateDigitalTwinAsync(smartPlugAggregatorId, updateTwinData);
log.LogInformation("ProcessDTRoutedData (Done)...");
log.LogInformation(" ");
}
catch (Exception ex)
{
log.LogError($"Error: {ex.Message}");
}
return;
}
}
}
}
}
}

Because your query is a projection (i.e. it specifies one or more columns to return, as opposed to doing a SELECT *), the query returns an AsyncPageable<IDictionary<string, BasicDigitalTwin>> rather than the AsyncPageable<BasicDigitalTwin> you would get for wildcard queries.
You would change your code to something like this:
AsyncPageable<IDictionary<string, BasicDigitalTwin>> queryResponse = client.QueryAsync<IDictionary<string, BasicDigitalTwin>>($"SELECT smartPlug FROM digitaltwins smartPlugAggregator JOIN smartPlug RELATED smartPlugAggregator.observes WHERE smartPlugAggregator.$dtId = '{smartPlugAggregatorId}'");
List<SmartPlug> SmartPlugList = new List<SmartPlug>();
// loop through each smartPlugSensor and build a list of smartPlugSensors
await foreach (IDictionary<string, BasicDigitalTwin> d in queryResponse)
{
    if (d.ContainsKey("smartPlug"))
    {
        SmartPlugList.Add(JsonConvert.DeserializeObject<SmartPlug>(d["smartPlug"].ToString()));
    }
}
Please see https://learn.microsoft.com/en-us/azure/digital-twins/how-to-query-graph#run-queries-with-the-api for more info.
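If the SmartPlug DTO does not map cleanly from the returned BasicDigitalTwin values, a minimal sketch of pulling the properties out of Contents explicitly could look like the following. This is my own assumption rather than part of the answer above; it assumes the Contents values come back as System.Text.Json JsonElement, which is the default for the SDK's serializer.
await foreach (IDictionary<string, BasicDigitalTwin> d in queryResponse)
{
    if (!d.ContainsKey("smartPlug")) continue;

    BasicDigitalTwin plugTwin = d["smartPlug"];

    // Contents values are JsonElement under the default serializer (requires: using System.Text.Json;)
    SmartPlugList.Add(new SmartPlug
    {
        id = plugTwin.Id,
        ActiveEnergyWh = ((JsonElement)plugTwin.Contents["ActiveEnergyWh"]).GetDouble(),
        ActivePowerW = ((JsonElement)plugTwin.Contents["ActivePowerW"]).GetDouble(),
        ReActiveEnergyVARh = ((JsonElement)plugTwin.Contents["ReActiveEnergyVARh"]).GetDouble(),
        ReActivePowerVAR = ((JsonElement)plugTwin.Contents["ReActivePowerVAR"]).GetDouble(),
    });
}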

Related

Azure IoT Hub - How to get what's newer of desired vs reported property in device twins?

I'm using .NET Core 3 to read device twins from an Azure IoT Hub. I want to get property X, and that property is, as usual, stored in both the desired and reported properties. I want to get whichever one is newer. This information is written in the metadata.
My question is, is this possible via just the IoT Hub Query Language or do I have to fetch from both desired and reported and check this out myself?
The Azure IoT Hub Query Language supports only a subset of SQL, so the following example (for device1 and the twin property color) shows a workaround for the missing CASE statement:
Query string to get the desired property when it is the more recently updated one:
querystring = $"SELECT devices.properties.desired.color FROM devices WHERE deviceId = 'device1' and devices.properties.desired.$metadata.color.$lastUpdated > devices.properties.reported.$metadata.color.$lastUpdated";
If the return value is empty, we have to make a second query to obtain the reported property instead:
querystring = $"SELECT devices.properties.reported.color FROM devices WHERE deviceId = 'device1' and devices.properties.reported.$metadata.color.$lastUpdated > devices.properties.desired.$metadata.color.$lastUpdated";
If the return value is still empty, the desired and/or reported property is missing from the device twin, or the deviceId is wrong.
The following code snippet shows an example of the above usage:
using Microsoft.Azure.Devices;
using System.Linq;
using System;
using System.Threading.Tasks;
namespace ConsoleApp3
{
class Program
{
static string connectionString = "*****";
static async Task Main(string[] args)
{
RegistryManager registryManager = RegistryManager.CreateFromConnectionString(connectionString);
string deviceId = "device1";
string propertyName = "color";
string querystring = $"SELECT devices.properties.desired.{propertyName} FROM devices WHERE deviceId = '{deviceId}' and devices.properties.desired.$metadata.{propertyName}.$lastUpdated > devices.properties.reported.$metadata.{propertyName}.$lastUpdated";
dynamic prop = null;
for (int ii = 0; ii < 2; ii++)
{
var query = registryManager.CreateQuery(querystring);
{
prop = (await query.GetNextAsJsonAsync())?.FirstOrDefault();
if (prop == null)
querystring = $"SELECT devices.properties.reported.{propertyName} FROM devices WHERE deviceId = '{deviceId}' and devices.properties.reported.$metadata.{propertyName}.$lastUpdated > devices.properties.desired.$metadata.{propertyName}.$lastUpdated";
else
break;
}
}
Console.WriteLine(prop ?? $"Not found property '{propertyName}' or device '{deviceId}'");
}
}
}
UPDATE:
In the case of multiple properties, we have to fetch the device twin entity and check each property individually in code. The following code snippet shows an example of this check:
// multiple properties
querystring = $"SELECT devices.properties FROM devices WHERE deviceId='{deviceId}'";
var query2 = registryManager.CreateQuery(querystring);
JObject prop2 = JObject.Parse((await query2.GetNextAsJsonAsync())?.FirstOrDefault());
JToken desired = prop2.SelectToken("properties.desired");
JToken reported = prop2.SelectToken("properties.reported");
string pathLastUpdated = $"$metadata.{propertyName}.$lastUpdated";
var color = (DateTime)desired.SelectToken(pathLastUpdated) > (DateTime)reported.SelectToken(pathLastUpdated) ?
(string)desired[propertyName] : (string)reported[propertyName];
// more properties
Console.WriteLine(color);
Also, you can create an extension class to simplify the code; see the following example:
public static class JObjectExtensions
{
public static T GetLastUpdated<T>(this JObject properties, string propertyName)
{
JToken desired = properties.SelectToken("properties.desired");
JToken reported = properties.SelectToken("properties.reported");
string pathLastUpdated = $"$metadata.{propertyName}.$lastUpdated";
return (DateTime)desired.SelectToken(pathLastUpdated) > (DateTime)reported.SelectToken(pathLastUpdated) ?
desired.SelectToken(propertyName).ToObject<T>() : reported.SelectToken(propertyName).ToObject<T>();
}
public static string GetLastUpdated(this JObject properties, string propertyName)
{
return GetLastUpdated<string>(properties, propertyName);
}
}
The following usage of the above extension shows how any desired vs. reported property can be obtained based on its lastUpdated timestamp:
color = prop2.GetLastUpdated(propertyName);
string color2 = prop2.GetLastUpdated("test.color");
var test = prop2.GetLastUpdated<JObject>("test");
string jsontext = prop2.GetLastUpdated<JObject>("test").ToString(Formatting.None);
var test2 = prop2.GetLastUpdated<Test>("test");
int counter = prop2.GetLastUpdated<int>("counter");
Note that an exception is thrown if the property is missing.
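If you would rather avoid that exception, a possible variant is a Try-style method that checks the metadata tokens first. This is a minimal sketch of my own; TryGetLastUpdated is a hypothetical name, not part of the answer above.
public static bool TryGetLastUpdated<T>(this JObject properties, string propertyName, out T value)
{
    value = default;
    JToken desired = properties.SelectToken("properties.desired");
    JToken reported = properties.SelectToken("properties.reported");
    string pathLastUpdated = $"$metadata.{propertyName}.$lastUpdated";

    // SelectToken returns null when the property or its metadata is missing
    JToken desiredUpdated = desired?.SelectToken(pathLastUpdated);
    JToken reportedUpdated = reported?.SelectToken(pathLastUpdated);
    if (desiredUpdated == null && reportedUpdated == null)
        return false;

    // prefer whichever side has the newer $lastUpdated timestamp (or the only side that exists)
    JToken source = reportedUpdated == null ? desired
        : desiredUpdated == null ? reported
        : (DateTime)desiredUpdated > (DateTime)reportedUpdated ? desired : reported;

    JToken token = source.SelectToken(propertyName);
    if (token == null)
        return false;

    value = token.ToObject<T>();
    return true;
}
Usage would then be along the lines of: if (prop2.TryGetLastUpdated<string>("color", out var colorValue)) Console.WriteLine(colorValue);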

Trigger notification after Computer Vision OCR extraction is complete

I am exploring Microsoft Computer Vision's Read API (asyncBatchAnalyze) for extracting text from images. I found some sample code on the Microsoft site to extract text from images asynchronously. It works in the following way:
1) Submit image to asyncBatchAnalyze API.
2) This API accepts the request and returns a URI.
3) We need to poll this URI to get the extracted data.
Is there any way we can trigger some notification (like publishing a notification to AWS SQS or a similar service) when asyncBatchAnalyze is done with the image analysis?
public class MicrosoftOCRAsyncReadText {
private static final String SUBSCRIPTION_KEY = "key";
private static final String ENDPOINT = "https://computervision.cognitiveservices.azure.com";
private static final String URI_BASE = ENDPOINT + "/vision/v2.1/read/core/asyncBatchAnalyze";
public static void main(String[] args) {
CloseableHttpClient httpTextClient = HttpClientBuilder.create().build();
CloseableHttpClient httpResultClient = HttpClientBuilder.create().build();
try {
URIBuilder builder = new URIBuilder(URI_BASE);
URI uri = builder.build();
HttpPost request = new HttpPost(uri);
request.setHeader("Content-Type", "application/octet-stream");
request.setHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
String image = "/Users/xxxxx/Documents/img1.jpg";
File file = new File(image);
FileEntity reqEntity = new FileEntity(file);
request.setEntity(reqEntity);
HttpResponse response = httpTextClient.execute(request);
if (response.getStatusLine().getStatusCode() != 202) {
HttpEntity entity = response.getEntity();
String jsonString = EntityUtils.toString(entity);
JSONObject json = new JSONObject(jsonString);
System.out.println("Error:\n");
System.out.println(json.toString(2));
return;
}
String operationLocation = null;
Header[] responseHeaders = response.getAllHeaders();
for (Header header : responseHeaders) {
if (header.getName().equals("Operation-Location")) {
operationLocation = header.getValue();
break;
}
}
if (operationLocation == null) {
System.out.println("\nError retrieving Operation-Location.\nExiting.");
System.exit(1);
}
/* Wait for asyncBatchAnalyze to complete. In place of this wait, can we trigger any notification from Computer Vision when the extract text operation is complete?
*/
Thread.sleep(5000);
// Call the second REST API method and get the response.
HttpGet resultRequest = new HttpGet(operationLocation);
resultRequest.setHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
HttpResponse resultResponse = httpResultClient.execute(resultRequest);
HttpEntity responseEntity = resultResponse.getEntity();
if (responseEntity != null) {
String jsonString = EntityUtils.toString(responseEntity);
JSONObject json = new JSONObject(jsonString);
System.out.println(json.toString(2));
}
} catch (Exception e) {
System.out.println(e.getMessage());
}
}
}
There is no notification / webhook mechanism on those asynchronous operations.
The only thing I can see right now is to change the implementation you mentioned to use a while loop that regularly checks whether the result is there yet (plus a mechanism to stop waiting, based on a maximum waiting time or number of retries).
See the sample in the Microsoft docs here, especially this part:
// If the first REST API method completes successfully, the second
// REST API method retrieves the text written in the image.
//
// Note: The response may not be immediately available. Text
// recognition is an asynchronous operation that can take a variable
// amount of time depending on the length of the text.
// You may need to wait or retry this operation.
//
// This example checks once per second for ten seconds.
string contentString;
int i = 0;
do
{
System.Threading.Thread.Sleep(1000);
response = await client.GetAsync(operationLocation);
contentString = await response.Content.ReadAsStringAsync();
++i;
}
while (i < 10 && contentString.IndexOf("\"status\":\"Succeeded\"") == -1);
if (i == 10 && contentString.IndexOf("\"status\":\"Succeeded\"") == -1)
{
Console.WriteLine("\nTimeout error.\n");
return;
}
// Display the JSON response.
Console.WriteLine("\nResponse:\n\n{0}\n",
JToken.Parse(contentString).ToString());
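If you still need a push-style notification, one option is to publish your own message from the polling code once it sees the Succeeded status, for example to an Azure Storage queue that a downstream consumer listens to. A minimal sketch, and my own assumption rather than a Computer Vision feature: storageConnectionString and the ocr-results queue name are placeholders, and it uses the Azure.Storage.Queues package.
using Azure.Storage.Queues;

// After the polling loop above detects "status":"Succeeded":
var queueClient = new QueueClient(storageConnectionString, "ocr-results"); // placeholder connection string and queue name
await queueClient.CreateIfNotExistsAsync();

// Send the operation location (or the OCR JSON itself) so a downstream
// consumer knows which result is ready.
await queueClient.SendMessageAsync(operationLocation);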

Azure Cognitive Services - Speech sample code fails with authentication error

The program returns: CANCELED: Reason=Error ErrorDetails=WebSocket Upgrade failed with an authentication error (401). Please check for correct subscription key (or authorization token) and region name. SessionId: cbfcdf7f26304343a08de6c398652053
I'm using my free trial subscription key and westus region. This is the sample code found here: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/quickstarts/speech-to-text-from-microphone?tabs=unity%2Cx-android%2Clinux%2Cjava-runtime&pivots=programming-language-csharp
using UnityEngine;
using UnityEngine.UI;
using Microsoft.CognitiveServices.Speech;
#if PLATFORM_ANDROID
using UnityEngine.Android;
#endif
#if PLATFORM_IOS
using UnityEngine.iOS;
using System.Collections;
#endif
public class Helloworld : MonoBehaviour
{
// Hook up the two properties below with a Text and Button object in your UI.
public Text outputText;
public Button startRecoButton;
private object threadLocker = new object();
private bool waitingForReco;
private string message;
private bool micPermissionGranted = false;
#if PLATFORM_ANDROID || PLATFORM_IOS
// Required to manifest microphone permission, cf.
// https://docs.unity3d.com/Manual/android-manifest.html
private Microphone mic;
#endif
public async void ButtonClick()
{
// Creates an instance of a speech config with specified subscription key and service region.
// Replace with your own subscription key and service region (e.g., "westus").
var config = SpeechConfig.FromSubscription("yourSubscriptionKey", "yourRegion");
// Make sure to dispose the recognizer after use!
using (var recognizer = new SpeechRecognizer(config))
{
lock (threadLocker)
{
waitingForReco = true;
}
// Starts speech recognition, and returns after a single utterance is recognized. The end of a
// single utterance is determined by listening for silence at the end or until a maximum of 15
// seconds of audio is processed. The task returns the recognition text as result.
// Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
// shot recognition like command or query.
// For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);
// Checks result.
string newMessage = string.Empty;
if (result.Reason == ResultReason.RecognizedSpeech)
{
newMessage = result.Text;
}
else if (result.Reason == ResultReason.NoMatch)
{
newMessage = "NOMATCH: Speech could not be recognized.";
}
else if (result.Reason == ResultReason.Canceled)
{
var cancellation = CancellationDetails.FromResult(result);
newMessage = $"CANCELED: Reason={cancellation.Reason} ErrorDetails={cancellation.ErrorDetails}";
}
lock (threadLocker)
{
message = newMessage;
waitingForReco = false;
}
}
}
void Start()
{
if (outputText == null)
{
UnityEngine.Debug.LogError("outputText property is null! Assign a UI Text element to it.");
}
else if (startRecoButton == null)
{
message = "startRecoButton property is null! Assign a UI Button to it.";
UnityEngine.Debug.LogError(message);
}
else
{
// Continue with normal initialization, Text and Button objects are present.
#if PLATFORM_ANDROID
// Request to use the microphone, cf.
// https://docs.unity3d.com/Manual/android-RequestingPermissions.html
message = "Waiting for mic permission";
if (!Permission.HasUserAuthorizedPermission(Permission.Microphone))
{
Permission.RequestUserPermission(Permission.Microphone);
}
#elif PLATFORM_IOS
if (!Application.HasUserAuthorization(UserAuthorization.Microphone))
{
Application.RequestUserAuthorization(UserAuthorization.Microphone);
}
#else
micPermissionGranted = true;
message = "Click button to recognize speech";
#endif
startRecoButton.onClick.AddListener(ButtonClick);
}
}
void Update()
{
#if PLATFORM_ANDROID
if (!micPermissionGranted && Permission.HasUserAuthorizedPermission(Permission.Microphone))
{
micPermissionGranted = true;
message = "Click button to recognize speech";
}
#elif PLATFORM_IOS
if (!micPermissionGranted && Application.HasUserAuthorization(UserAuthorization.Microphone))
{
micPermissionGranted = true;
message = "Click button to recognize speech";
}
#endif
lock (threadLocker)
{
if (startRecoButton != null)
{
startRecoButton.interactable = !waitingForReco && micPermissionGranted;
}
if (outputText != null)
{
outputText.text = message;
}
}
}
}
The sample code you pasted above still has the placeholder values for region and subscription key. Just double checking that you did in fact replace those strings with your own subscription key and region? If that's true, can you please turn on logging, run the code again, and then provide the log? We can help diagnose from there...
To turn on logging, see https://aka.ms/speech/logging.
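For reference, file logging can be switched on via a property on the SpeechConfig before the recognizer is created. A minimal sketch; the log file path is just an example, and on Android/iOS it should point to a writable location such as Application.persistentDataPath.
var config = SpeechConfig.FromSubscription("yourSubscriptionKey", "yourRegion");

// Write detailed Speech SDK traces to a file (see https://aka.ms/speech/logging)
config.SetProperty(PropertyId.Speech_LogFilename, "/path/to/speech-sdk.log");

using (var recognizer = new SpeechRecognizer(config))
{
    // ... recognition as in the sample above ...
}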

Counter triggered when node is created/deleted, but also triggers for changes to children

This is the structure of my Firebase DB:
/UserData
  /DeviceMgmt
    /Counters
      /NumberOfAll
      /NumberOfSelected
      /TotalDownloaded
      ...
    /Devices
      /pushId1
        /uid
        /toSelect=true (optional)
        /downloaded
        /lastDownload
      /pushId2
      ...
My code:
exports.countNumberOfAllDevices = functions.database.ref('/UserData/DeviceMgmt/Devices/{pushId}').onWrite(
(change) => {
const collectionRef = change.after.ref;// /UserData/DeviceMgmt/Devices/{pushId}
const countRef = collectionRef.parent.parent.child('Counters/NumberOfAll');
let increment;
if (change.after.exists() && !change.before.exists()) {
increment = 1;
} else if (!change.after.exists() && change.before.exists()) {
increment = -1;
} else {
return null;
}
return countRef.transaction((current) => {
return (current || 0) + increment;
}).then(() => {
return console.log('counter /UserData/Counters/NumberOfAll updated.');
});
});
It is based on functions-samples/child-count/, with the pushed messages replaced by my devices; but my devices have children, whereas the messages in the example are childless.
My problem is:
1. When my device is created in Java, one write is the object creation and another is the update of its child, so the NumberOfAll counter increments by 2.
2. On every change within a {pushId} device the function runs to no purpose, because such a change does not alter the number of devices.
Should I replace countNumberOfAllDevices with two functions:
incrementNumberOfAllDevices with onCreate (after the device object is created, no child is created later)
decrementNumberOfAllDevices with onDelete (before the device object is deleted, no child is deleted earlier)
OR
go deeper with the snapshot reference, from
/Devices/{pushId}
to
/Devices/{pushId}/uid
since uid never changes between creation and deletion?
My Java code that triggers the Cloud Function:
void testAddNewDeviceToDevices(String token) {
Device device = new Device( "Test", 0, token);
String deviceKey = dbRefDevices.push().getKey();
dbRefDevices.child(deviceKey).setValue(device)
.addOnFailureListener(new OnFailureListener() {
@Override public void onFailure(@NonNull Exception e) {
Log.e(TAG, "failed. Exception: ", e);
}
});
}
You should use onCreate and onDelete. They were introduced in version 1.0 of the SDK for exactly the case you're describing. It will simplify your code drastically.
IMO, onWrite is not that useful. It's kind of a legacy trigger from pre-1.0 days.

Call action within another graph, how to pass the adapter

My goal is, from a given screen:
- Add lines to the Adjustments tab of the Payments & Applications graph
- Release
I tried to do this:
override public void createLettering(List<ARRegister> lines)
{
string refNbr = "";
foreach (ARRegister line in lines)
{
if (line.DocType == "PMT") refNbr = line.RefNbr;
}
// Get the paymententry graph, and add the invoice
ARPaymentEntry graphPmt = PXGraph.CreateInstance<ARPaymentEntry>();
ARPayment pmt = PXSelect<ARPayment, Where<ARPayment.refNbr, Equal<Required<ARPayment.refNbr>>,
And<ARPayment.docType, Equal<Required<ARPayment.docType>>>>>
.Select(this,refNbr, "PMT");
graphPmt.Document.Current = pmt;
if (pmt == null) throw new PXException(Constantes.errNotFound);
//pmt.CuryOrigDocAmt = 0m;
//graphPmt.Document.Update(pmt);
ARAdjust adj = new ARAdjust();
foreach(ARRegister line in lines)
{
if (line.DocType == "INV")
{
adj = new ARAdjust();
adj.AdjdDocType = line.DocType;
adj.AdjdRefNbr = line.RefNbr;
graphPmt.Adjustments.Insert(adj);
}
}
PXAdapter adapter = new PXAdapter(new PXView(graphPmt,true, graphPmt.Document.View.BqlSelect));
graphPmt.Persist();
graphPmt.Release(adapter);
}
My problem is that I think my adapter gets every single ARPayment in it and thus tries to release them all. (The output of this function is a long processing time, and then it tells me 'PaymentMethod can't be null', even though the PaymentMethod of my graphPmt.Document is not null when I check in debug.)
So how do I pass a correct PXAdapter to the Release(PXAdapter adapter) method of the PaymentEntry graph from another custom graph of mine?
I would think you should be able to call the action directly, such as:
graphPmt.release.Press();
I have not tested this but I recall doing something like this for other actions in the past.
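Applied to the code in the question, the end of createLettering might then look something like this (a sketch based on that suggestion, not tested; it assumes ARPaymentEntry exposes its Release action as the release member used above):
// ... after inserting the ARAdjust lines into graphPmt.Adjustments ...
graphPmt.Persist();           // save the payment with its new adjustments, as before
graphPmt.release.Press();     // invoke the graph's own Release action instead of building a PXAdapter manually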
