AadServiceTemporarilyUnavailable error in Azure Logic Apps

I am trying to automate saving email attachments using Azure Logic Apps, but I am getting the above error. Could anyone please help me solve this?

The following suggestions are based on our research; please check whether they apply to your situation.
This error can occur when multiple Logic App instances run at the same time, or in parallel. Make sure you enable Concurrency Control on the trigger.
If you are still receiving the same error, you can add error handling and use Azure Monitor, which catches most issues such as malformed data.
You can also check Handling error codes in your application, which lists some common errors, and then Troubleshoot and diagnose workflow failures accordingly.
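Concurrency Control can also be set in the workflow definition (code view) via the trigger's runtimeConfiguration. A minimal sketch, assuming an email-arrival trigger (the trigger name and type here are illustrative, not taken from your workflow):

```json
"triggers": {
    "When_a_new_email_arrives": {
        "type": "ApiConnection",
        "runtimeConfiguration": {
            "concurrency": {
                "runs": 1
            }
        }
    }
}
```

Setting "runs" to 1 forces the workflow to process one trigger event at a time; a higher value allows limited parallelism.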

Related

PolicyCenter does not send account updates to BillingCenter

I'm developing a PolicyCenter integration with BillingCenter. I followed the initial step-by-step setup in the documentation, but when a field of an account is changed in PolicyCenter, the change is not synchronized to BillingCenter.
I need to sync PolicyCenter account updates to BillingCenter, but I couldn't find anything specific in the documentation.
To resolve the issue with the PolicyCenter integration with BillingCenter not synchronizing properly, you can try the following steps:
1) Check the configuration settings: ensure that the integration settings are configured correctly, including the mapping of fields between PolicyCenter and BillingCenter.
2) Verify data consistency: make sure that the data in PolicyCenter and BillingCenter is consistent and meets the required format, including the data types, lengths, and values of the fields.
3) Monitor system logs: check the logs of both PolicyCenter and BillingCenter for any error messages or exceptions related to the integration. This can help you identify issues with the data or with the communication between the two systems.
4) Test the integration: run a test of the integration to see whether it is working properly. This can help you identify issues with the integration or the data flow.
5) Contact support: if you are unable to resolve the issue, reach out to the Guidewire support team. They can help you troubleshoot and resolve whatever is causing the synchronization problem.
If you need further guidance on any of these steps or have any other questions, consult the Guidewire documentation or reach out to the support team for help.
Try implementing a preUpdate plugin (for example, DemoPreUpdate.gs).

How to run long running synchronous operation in nodejs

I am writing a payroll management web application in Node.js for my organisation. In many cases the application involves CPU-intensive mathematical calculations, and many users may be running them simultaneously.
If I write the logic plainly (setting aside the fact that I have already done my best, from an algorithm and data-structure point of view, to contain the complexity), it will run synchronously, block the event loop, and make requests and responses slow.
How do I resolve this scenario? What are the possible options for doing this asynchronously? I should also mention that this calculation can be left to run in the background, and I can later notify the user about its status. I have searched for solutions all over and found some, but only in theory; I haven't tested them all by implementing them. They are:
Clustering the node server
Use worker threads
Use an alternate server and do some load balancing.
Use a message queue and couple it with worker threads to do background tasks.
Can someone offer tried and battle-tested advice on this scenario, along with links to relevant tutorials?
You might want to try web workers; they are easy to use and well documented:
https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers

Best practices to handle errors in GCP Dataflow pipelines

I have a GCP data pipeline running and I am wondering what the best ways to handle errors are. The pipeline looks like this:
read_from_pubsub --> business_logic_ParDo() --> write_to_bigquery
While testing, I noticed the ParDo getting stuck. I was able to resolve that particular issue, but it left my pipeline stuck, so what is the best approach for handling this?
What should my ParDo function do if the business logic fails? I don't want to write partial data to BigQuery.
I can't think of any other error scenarios.
I would recommend the dead-letter pattern for handling unrecoverable errors in business logic. As for aborting stuck records, you could try something like func-timeout, but that could be expensive to apply to every element.
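The dead-letter pattern itself is simple: instead of letting a failure abort the bundle, the business-logic step catches the exception and routes the failing element, together with the error, to a separate sink for later inspection and replay (in Beam this is typically done with a tagged side output). A language-agnostic sketch, with processWithDeadLetter and the sample business logic as illustrative names rather than Beam API calls:

```javascript
// Dead-letter routing: elements that fail business logic are diverted to
// a side channel with their error, instead of failing the whole pipeline.
function processWithDeadLetter(elements, businessLogic) {
  const ok = [];
  const deadLetter = [];
  for (const element of elements) {
    try {
      ok.push(businessLogic(element));
    } catch (err) {
      // Keep the original element and the reason so the record can be
      // inspected and replayed once the bug or bad data is fixed.
      deadLetter.push({ element, error: err.message });
    }
  }
  return { ok, deadLetter };
}
```

Only the ok output feeds the BigQuery write, so partial or malformed records never reach it; the dead-letter sink (e.g. a separate table or topic) is monitored and replayed separately.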

How to use Application Insights to capture IoT Edge device logs?

I am trying to understand the use of Application Insights for capturing module logs and am considering it as a potential option.
I am keen to understand how Application Insights would work given that there would be multiple devices, each running the same modules, where the modules are configured to send log data to Application Insights. The data I want to capture is container logs, which are currently sent to the stderr/stdout streams. I expect this to work on Windows devices, so the logspout project (https://github.com/veyalla/logspout-loganalytics) may not be useful here, but I want to do something similar.
I am trying to figure out a design where module logs from multiple edge devices can be captured using Application Insights. It would be immensely useful to know whether Application Insights is really suited to the problem I am trying to solve and how it can be used across multiple devices.
I'm not aware of a good/secure solution for Windows containers that does a continuous push of module logs to log analytics.
Since the built-in log pull via edgeAgent is experimental, we might change the API or make some modifications but we're unlikely to pull the feature entirely without an equivalent alternative.

Sending Actor Runtime ETW logs to ElasticSearch

Currently I'm trying out ElasticSearch as a logging solution to pump ETW events into.
I've followed this tutorial (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostic-how-to-use-elasticsearch), and this is working great for my own custom ActorEventSource logs, but I haven't found a way to log the Actor Runtime events (ActorMethodStart, ActorMethodStop... etc) using the "in-process" trace capturing.
1) Is this possible using the in-process trace capturing?
I'm also considering using the "out-of-process" trace capturing, which to me seems like the preferable way of doing things in our situation, as we already have WAD setup which includes all of the Actor Runtime events already. Not to mention the potential performance impact / other side-effects of running the ElasticSearchListener inside of our Actor Services.
2) I'm not quite sure how to implement this. The https://github.com/Azure/azure-diagnostics-tools/tree/master/ES-MultiNode project doesn't seem to include Logstash, so I'm assuming I would need a template such as this one: https://github.com/Azure/azure-diagnostics-tools/tree/master/ELK-Semantic-Logging/ELK/AzureRM/elk-simple-on-ubuntu, or otherwise I would need to modify the ES-MultiNode project to install Logstash as well? I'm just trying to get an idea of whether I'm going down the right path here.
If there's any other suggestions in terms of logging, I'd love to hear them!