How to make an HttpRequest 'GET' in Apex and then update a record in Salesforce

Overview:
We have a third party that hosts a text value at a given endpoint. A 'GET' request to a URL, in which we also pass a key and parameters, returns a string value (decimal numbers and a space).
I created some Apex code, including an @InvocableMethod, so I could call the Apex from a flow where I pass in the URL; the text is then returned to the flow, and I go on to update a record.
Here is the method;
there is also a class, FR_Amount_Variables, storing the URL and String @InvocableVariable values.
public class FR_Amount_Sync {
    @InvocableMethod(label='FR Amount Raised Get')
    public static List<FR_Amount_Variables> getFRamount(List<FR_Amount_Variables> inputURL) {
        FR_Amount_Variables amtvar = new FR_Amount_Variables();
        List<FR_Amount_Variables> getFRamount = new List<FR_Amount_Variables>();

        // GET callout to the endpoint passed in from the flow
        String endpoint = inputURL[0].URL;
        Http http = new Http();
        HttpRequest request = new HttpRequest();
        request.setEndpoint(endpoint);
        request.setMethod('GET');
        HttpResponse response = http.send(request);

        // Strip whitespace; default to '0.00' if the body comes back empty
        String amounts = response.getBody().replaceAll('\\s+', '');
        if (String.isEmpty(amounts)) {
            amounts = '0.00';
        }

        // Return the parsed amount to the flow
        amtvar.amount = Decimal.valueOf(amounts);
        getFRamount.add(amtvar);
        return getFRamount;
    }
}
The flow ('Update Flow') can be seen in the image below.
Issue:
When I run the flow in Debug mode, set the three input variables, and run it, the flow executes the Apex and updates the specified record correctly.
Likewise, if I preset the flow's input variables (add a default value) and then just run the flow, the Apex and the record update succeed, with the record being updated with the correct value from the third party.
The issue is that when I try to run the flow automatically, either by Process Builder or by Mass Action Scheduler, I receive system exception errors.
An Apex error occurred:
System.CalloutException: You have uncommitted work pending. Please commit or rollback before calling out
and
An Apex error occurred: System.CalloutException: Callout loop not allowed
respectively.
I was wondering if there is any way to trigger the flow that doesn't trigger an error. Otherwise, is there a way I can make an HttpRequest 'GET' callout and then update a record with the received value?

We cannot do DML before a callout in the same transaction.
DML can be done after the callout.
So the best practice is to do the callout in a future method; the callout then runs in its own transaction, and the flow can handle the DML operations. A minimal sketch of this pattern is shown after the link below.
For example, check this link -
https://www.infallibletechie.com/2020/04/how-to-do-callout-from-flow-in.html
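As a rough illustration of that pattern (not the linked article's exact code), the invocable method below hands the callout off to an @future(callout=true) method, so any DML done earlier by the flow or Process Builder no longer blocks it. The class name, the FR_Amount_Request wrapper, the recordId input, and the Opportunity/Amount target are assumptions for the sketch; adjust them to your own object and fields.
public class FR_Amount_AsyncSync {
    // Hypothetical request wrapper for the flow's input values
    public class FR_Amount_Request {
        @InvocableVariable(required=true) public String url;
        @InvocableVariable(required=true) public Id recordId;
    }

    // Invocable entry point called from the flow; it only enqueues the async callout
    @InvocableMethod(label='FR Amount Raised Get (async)')
    public static void getFRamountAsync(List<FR_Amount_Request> requests) {
        for (FR_Amount_Request req : requests) {
            doCalloutAndUpdate(req.url, req.recordId);
        }
    }

    // Runs in a separate transaction, so earlier DML no longer blocks the callout
    @future(callout=true)
    private static void doCalloutAndUpdate(String endpoint, Id recordId) {
        HttpRequest request = new HttpRequest();
        request.setEndpoint(endpoint);
        request.setMethod('GET');
        HttpResponse response = new Http().send(request);

        String body = response.getBody().replaceAll('\\s+', '');
        Decimal amount = String.isEmpty(body) ? 0.00 : Decimal.valueOf(body);

        // DML after the callout is allowed within the same (async) transaction.
        // Opportunity/Amount is an assumed target; use your own object and field.
        update new Opportunity(Id = recordId, Amount = amount);
    }
}
Because the work is asynchronous, the flow no longer receives the amount back; the record update happens in the future method itself, which is the trade-off of this approach.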

Related

Handling of ETags in a batch request using SAP Cloud SDK

I am trying to carry out a batch request including a create, an update, and a delete (all different sales orders). As per this question here, which deals with something similar, I have done a GET for the items I want to update and delete before adding them to the batch request. I am using SalesOrder.builder() to prepare the SalesOrder I want to create.
final ErpHttpDestination destination = DestinationAccessor.getDestination(DESTINATION_NAME)
.asHttp().decorate(DefaultErpHttpDestination::new);
final SalesOrderItem salesOrderItem1 = SalesOrderItem.builder().material(material)
.requestedQuantityUnit(requestedQuantityUnit).build();
final SalesOrder salesOrder1 = SalesOrder.builder().distributionChannel(distributionChannel)
.salesOrderType(salesOrderType).salesOrganization(salesOrganization)
.organizationDivision(organizationDivision).soldToParty(soldToParty)
.item(salesOrderItem1).build();
final SalesOrder orderToUpdate = new GetSingleSalesOrderCommand(orderToUpdateID, destination,
new DefaultSalesOrderService()).execute();
orderToUpdate.setSoldToParty(updateSoldToParty);
final SalesOrder orderToDelete = new GetSingleSalesOrderCommand(orderToDeleteID, destination,
new DefaultSalesOrderService()).execute();
SalesOrderServiceBatch service = new DefaultSalesOrderServiceBatch(
new DefaultSalesOrderService());
BatchResponse bRes = service.beginChangeSet().createSalesOrder(salesOrder1).updateSalesOrder(orderToUpdate)
.deleteSalesOrder(orderToDelete).endChangeSet().execute(destination);
I am then logging the BatchResponse and see I am getting a Batch Response Failure:
eTag handling not supported for http method 'POST'
I have searched for this error but can't find any resolution to it. Any ideas?
Thanks.
UPDATE: Increasing the logging to DEBUG I can see the batch request that is being sent and can see that there is an if-match header being added to the create request, which doesn't make sense as it can't match something that doesn't exist yet.
"msg":"--batch_123\r\nContent-Type: multipart/mixed;
boundary=changeset_(changeset number)\r\n\r\n--
changeset_(changeset number)\r\nContent-Type:
application/http\r\nContent-Transfer-Encoding: binary\r\n\r\nPOST
/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder HTTP/1.1\r\nContent-
Length:
193\r\nIf-Match: W/\"datetimeoffset'2020-05-
01T11%3A51%3A16.8631720Z'\"\r\nAccept:
application/json;odata=verbose\r\nContent-Type:......
Then I get the error:
Inner Error:
"msg":"batch responseFailure(com.sap.cloud.sdk.odatav2.connectivity.ODataException: null:
<?xml version=\"1.0\" encoding=\"utf-8\"?>
<error xmlns=\"http://schemas.microsoft.com/ado/2007/08/dataservices/metadata\">
<code>/IWFND/CM_MGW/537</code>
<message xml:lang=\"en\">eTag handling not supported for http method 'POST'</message>
<innererror>...
However, what does work is wrapping each request in its own changeset, e.g.:
service
.beginChangeSet().createSalesOrder(order).endChangeSet()
.beginChangeSet().updateSalesOrder(orderToUpdate).endChangeSet()
.beginChangeSet().deleteSalesOrder(orderToDelete).endChangeSet()
.execute(destination);
Edit:
This is fixed as of version 3.25.0.
Initial Answer:
This seems to be a bug. I was able to reproduce this with a different service and the behaviour is the same: the if-match header is incorrectly applied to the POST operation as well.
When debugging, it seems like the request is built up correctly, with the header only being present on update and delete. However, when the batch request is serialised to JSON it gets added to all requests.
So until this is fixed the workaround is isolating these operations via change sets, as you already pointed out.
Looks like eTag handling is not supported for your endpoint.
Now you can do the following to omit eTag headers:
orderToUpdate.setVersionIdentifier(null);
orderToDelete.setVersionIdentifier(null);
However, I'm not sure how 'POST' fits the error description, because update uses PATCH and delete uses DELETE. The only POST I would expect comes from create, and we do not add entity version identifier (eTag) headers in the OData create operation. If the same error still comes up, please try again without running createSalesOrder(salesOrder1).

How to verify whether values are updated by an API using Groovy in SoapUI

I am using SoapUI and Groovy for API automation and assertions.
I have one API which updates user profile data, i.e. it updates username, first name, last name, etc.
What is the best way to verify whether the data was updated after running the update API? In Groovy, is there any way to store the previous data from the API response, run the update API, check the response again, and finally compare the previous response with the latest one?
What I have tried is comparing the values I am about to send via the API with the values the API returns; if both are equal, I assume the values were updated. But this does not seem like a reliable way to check the update function.
Define a test case level custom property, say DEPARTMENT_NAME and value as needed.
Add a Script Assertion for the same request test step with below script:
//Check if the response is received
assert context.response, 'Response is null or empty'
//Parse text to json
def json = new groovy.json.JsonSlurper().parseText(context.response)
log.info "Department name from response ${json.data.name}"
assert json.data.name == context.expand('${#TestCase#DEPARTMENT_NAME}'), 'Department name is not matched'
You may also edit the request and pass the value as ${#TestCase#DEPARTMENT_NAME} instead of the current fixed value, XAPIAS Department. That way you only have to change the department name in the test-case-level property; the same value is sent in the request and verified in the response.
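For instance, assuming the request posts the department under data.name (matching the assertion above; the exact payload shape is an assumption here), the parameterised request body could look like:
{
  "data": {
    "name": "${#TestCase#DEPARTMENT_NAME}"
  }
}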
Use a JDBC test step to run a query directly against the database, then use XPath assertions to validate your update API:
Assertion 1: XPath /Results/ResultSet[1]/Row[1]/FirstName, expected result: the updated FirstName
Assertion 2: XPath /Results/ResultSet[1]/Row[1]/LastName, expected result: the updated LastName
In our project we do it this way:
First we execute all the APIs.
Then we validate all the new/updated data in the database in a DB Validation test case.
This works well in a highly integrated environment as well.

How to pass information between spring-integration components?

In spring-batch, data can be passed between steps via the ExecutionContext: you can set the details in one step and retrieve them in the next. Do we have anything of this sort in spring-integration?
My use case is that I have to pick up a file from an FTP location, then split it based on certain business logic and process the parts. A client id is derived from the file name; this client id would be used in the splitter, service activator, and aggregator components.
With my newbie level of expertise in Spring, I could not find anything which helps me share state for a particular run. I wanted to know if spring-integration provides this kind of state-sharing context in some way.
Please let me know if there is a way to do this in the Spring context.
In Spring Integration applications there is no single ExecutionContext for state sharing. Instead, as Gary Russel mentioned, each message carries all the information within its payload or its headers.
If you use Spring Integration Java DSL and want to transport the clientId by message header you can use enrichHeader transformer. Being supplied with a HeaderEnricherSpec, it can accept a function which returns dynamically determined value for the specified header. As of your use case this might look like:
return IntegrationFlows
.from(/*ftp source*/)
.enrichHeaders(e -> e.headerFunction("clientId", this::deriveClientId))
./*split, aggregate, etc the file according to clientId*/
where the deriveClientId method might be something like:
private String deriveClientId(Message<File> fileMessage) {
String fileName = fileMessage.getHeaders().get(FileHeaders.FILENAME, String.class);
String clientId = /*some other logic for deriving clientId from*/fileName;
return clientId;
}
(FILENAME header is provided by FTP message source)
When you need to access the clientId header somewhere in the downstream flow you can do it the same way as file name mentioned above:
String clientId = message.getHeaders().get("clientId", String.class);
But make sure that the message still contains such a header, as it could have been lost somewhere among the intermediate flow items. This is likely to happen if at some point you construct a message manually and send it further. In order not to lose any headers from the preceding message, you can copy them while building:
Message<PayloadType> newMessage = MessageBuilder
.withPayload(payloadValue)
.copyHeaders(precedingMessage.getHeaders())
.build();
Please note that message headers are immutable in Spring Integration. It means you can't just add or change a header of the existing message. You should create a new message or use HeaderEnricher for that purpose. Examples of both approaches are presented above.
Typically you convey information between components in the message payload itself, or often via message headers - see Message Construction and Header Enricher

ASP.NET Identity transactions and errors

I've created a custom UserStore class in an ASP.NET MVC 5 web site to allow custom reading/writing to SQL Server, and I have a couple of questions...
When 'var result = await UserManager.CreateAsync(user, model.Password);' is called from the controller, 'public Task FindByNameAsync(string userName)' in UserStore is called first and then 'public Task CreateAsync(TUser user)' is called. What stops a second account with the same username from being created at the same time?
How can I raise errors in 'public Task CreateAsync(TUser user)' that result in 'result.Succeeded == false' and result errors in the controller?
For #1, we use ValidateEntity on IdentityDbContext to ensure that user names are unique. And in the 2.0 release we are adding a unique index on user names as well which should guarantee they are unique.
For #2, stores are expected to throw exceptions when operations fail; the basic CRUD operations are not expected to fail under normal circumstances. If you have special behavior that your store wants to expose, you either override or implement your own variant of CreateAsync that returns the appropriate IdentityResult with the error string you desire.

Why is IRequiresHttpRequest lazily loaded?

I'm trying to set up a set of rules that execute under one of 3 conditions:
HttpRequest.HttpMethod = "Put"
HttpRequest.HttpMethod = "Post"
HttpRequest == null
This last one will occur when I'm trying to validate a POCO from a Windows client (see my other question here about re-using ServiceStack validation in a WinForms offline client).
I was hoping to wrap my RuleFor()s in my validator in an if() statement, but HttpRequest is always null at that point (the documentation warns that it is lazily loaded and available only in the validation delegates).
The only other solution I've come up with is adding a .When() to every one of my rules that makes this check, but this seems like way too much repetitive code.
Is there a common code point where I can check the HttpRequest object to determine if it's null, or the verb is put/post?
