I have a plugin that I want to run synchronously as a Post Process, because if any of the requests it makes fail I'd like to roll everything back in one transaction. I've run into an issue: if I try to run an AssociateRequest to associate another record to the record that triggered the plugin, the request fails with a SQL timeout. I know my code is correct, because I'm using the same code to associate other records (ones that aren't firing the plugin) and they execute fine.
var referencedEntityRelationship = new Relationship(ReferencedEntityRelationshipName);
var referencedEntityEntities = new EntityReferenceCollection(new EntityReference[] { new EntityReference(ReferencedEntityLogicalName, new Guid(receivedRequest.ReferencingEntityId)) });
var rtn = new AssociateRequest() { RelatedEntities = referencedEntityEntities, Relationship = referencedEntityRelationship };
rtn.Target = new EntityReference() { Id = responseId, LogicalName = dataEntityLogicalName };
I know I can avoid the lock by running the plugin asynchronously, but then if there is a failure for some reason I'm unable to roll back the initial request that fired the plugin and notify the user that they need to fix something. Any advice on how to execute associate requests within a plugin against the record that fired the plugin is appreciated.
I am very new to Node.js and am trying to develop an application that acts as a scheduler: it fetches data from one ELK instance and sends the processed data to another ELK instance. I am able to achieve the expected behaviour, but after completing all the processing the scheduler job does not exit; it stays alive waiting for the next scheduled job to come up.
Note: This scheduler runs every 3 minutes.
job.js
const self = module.exports = {
    async schedule() {
        if (process.env.SCHEDULER == "MinuteFrequency") {
            var timenow = moment().seconds(0).milliseconds(0).valueOf();
            var endtime = timenow - 60000;
            var starttime = endtime - 60000 * 3;
            // sendData is an async method
            reports.sendData(starttime, endtime, "SCHEDULER");
        }
    }
}
I tried various solutions such as Promise.allSettled(...), Promise.resolve(true), etc., but was not able to fix this.
As per my requirement, I want the scheduler to complete its processing and then exit, so that I can save some resources, as I am planning to deploy the application using Kubernetes CronJobs.
When all your work is done, you can call process.exit() to cause your application to exit.
In this particular code, you may need to know when reports.sendData() is actually done before exiting. We would have to see that code to know how to tell when it is finished: just because it's an async function doesn't mean it's written properly to return a promise that resolves when its work is done. If you want further help, show us the code for sendData() and any code that it calls.
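For example, a minimal sketch of that pattern (using a hypothetical stand-in for reports.sendData; the real function must return a promise that resolves only when all the work is finished):

```javascript
// Hypothetical stand-in for reports.sendData; the real function must
// return a promise that resolves only once the data has been sent.
function sendData(starttime, endtime, tag) {
  return new Promise(resolve =>
    setTimeout(() => resolve(`${tag}:${endtime - starttime}`), 10));
}

async function schedule() {
  const endtime = Date.now() - 60000;
  const starttime = endtime - 60000 * 3;
  // Awaiting (returning) the promise is what makes "all work is done" knowable.
  return sendData(starttime, endtime, "SCHEDULER");
}

// Exit only after the awaited work settles.
schedule()
  .then(result => {
    console.log(result);   // prints "SCHEDULER:180000"
    // process.exit(0);    // uncomment in the real job once all work is done
  })
  .catch(err => {
    console.error(err);
    process.exit(1);
  });
```

If sendData() instead fires off work and returns immediately, no amount of awaiting at the call site can tell you when it is safe to exit.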
I'm stuck on a problem wiring some logic into a Node.js pg client. The main logic has two parts. The first connects to the Postgres server and receives notifications; it looks like the following:
var rules = {} // a rules object we are monitoring...
const pg_cli = new Client({
....
})
pg_cli.connect()
pg_cli.query('LISTEN zone_rules') // listen to the zone_rules channel
pg_cli.on('notification', msg => {
rules = msg.payload
})
This part is easy and runs without any issue. What I'm now trying to implement is another function that keeps monitoring the rules: when an object arrives and is put into rules, the function starts accumulating the time the object stays there (the object may later be deleted by another notification from the pg server), and it sends an alert to another server if the object's duration passes a certain threshold. I tried to write the code in the following style:
function check() {
    // watch and time accumulating code...
    process.nextTick(check)
}
check()
But I found that the 'notification' event handler then never had a chance to run! Does anybody have any idea about my problem, or should I be doing it another way?
Thanks!!!
Well, I found that changing nextTick to setImmediate solves the problem: a recursive process.nextTick loop keeps running ahead of the event loop's I/O phases and starves them, while setImmediate yields back to the event loop between iterations so the 'notification' handler can fire.
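For completeness, here is a sketch of the monitoring side (names like sendAlert and ALERT_MS are assumptions, not from the original code). Polling with a timer instead of a tight scheduling loop also leaves the event loop free, and a small pure helper tracks how long each key has been present in rules:

```javascript
// Track when each rule key was first seen, so we can accumulate
// how long it has been present in the rules object.
const firstSeen = new Map();

// Pure helper: given the current rules object and a timestamp (ms),
// return { key: msPresent } and forget keys that have been removed.
function updateDurations(rules, now) {
  const durations = {};
  for (const key of Object.keys(rules)) {
    if (!firstSeen.has(key)) firstSeen.set(key, now);
    durations[key] = now - firstSeen.get(key);
  }
  for (const key of [...firstSeen.keys()]) {
    if (!(key in rules)) firstSeen.delete(key); // rule was deleted
  }
  return durations;
}

// Poll with a timer rather than a recursive scheduling loop, so the
// pg 'notification' handler gets to run between checks.
// ALERT_MS and sendAlert are hypothetical:
// setInterval(() => {
//   const durations = updateDurations(rules, Date.now());
//   for (const [key, ms] of Object.entries(durations)) {
//     if (ms > ALERT_MS) sendAlert(key, ms);
//   }
// }, 1000);
```

A one-second interval is plenty for alert thresholds measured in seconds or minutes, and it costs far less CPU than re-queueing a check on every event-loop turn.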
Thanks for your help. I am new to Firebase and am designing an application with Node.js. What I want is that every time a change is detected in a document, a function is invoked that creates or updates the file system according to the new data structure in the Firebase document. Everything works fine, but the problem I have is that if the document is updated with 2 or more attributes, the makeBotFileSystem function is invoked the same number of times. This brings me problems, since it can cause performance issues and file-overwriting problems: what I do is generate or update multiple files.
I would like to wait until all the information in the document has finished updating, rather than reacting attribute by attribute. Is there any way? This is my code:
let botRef = firebasebotservice.db.collection('bot');
botRef.onSnapshot(querySnapshot => {
    querySnapshot.docChanges().forEach(change => {
        if (change.type === 'modified') {
            console.log('bot-changes ' + change.doc.id);
            const botData = change.doc.data();
            botData.botId = change.doc.id;
            // Here I create or update the filesystem structure according to the data changes
            fsbotservice.makeBotFileSystem(botData);
        }
    });
});
The onSnapshot function will notify you any time a document changes. If property changes are committed one by one instead of updating the document all at once, you will receive multiple snapshots.
One way to partially solve the multiple-snapshot issue would be to change the code that updates the document so it commits all property changes in a single operation, so that you only receive one snapshot.
Nonetheless, you should design the function triggered by the snapshot so that it can handle multiple document changes without breaking. Document updates will happen whether they arrive as single or multiple property changes, so your code should be able to handle both. IMHO the problem is the filesystem update rather than how many snapshots are received.
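One client-side way to make the filesystem update robust against a burst of per-attribute snapshots (a sketch; the 500 ms window is an arbitrary assumption) is to debounce the rebuild so it runs once per burst, with the last snapshot's data:

```javascript
// Generic debounce: the wrapped function runs once, waitMs after the
// last call in a burst, with the arguments of that last call.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);                       // cancel the pending run
    timer = setTimeout(() => fn(...args), waitMs); // reschedule
  };
}

// Usage sketch inside the snapshot handler:
// const rebuild = debounce(botData => fsbotservice.makeBotFileSystem(botData), 500);
// ...
// if (change.type === 'modified') rebuild(botData);
```

Since each snapshot carries the whole document (not a diff), acting only on the last one in a burst loses nothing, and the filesystem is written once instead of once per changed attribute.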
You should use the docChanges() method, like this:
db.collection("cities").onSnapshot(querySnapshot => {
    let changes = querySnapshot.docChanges();
    for (let change of changes) {
        var data = change.doc.data();
        console.log(data);
    }
});
I'm trying to do some offline synchronisation from a Xamarin.iOS app. When I call PullAsync on the IMobileServiceSyncTable, the call never returns.
I've tried with a regular IMobileServiceTable, which seems to work fine; the sync table seems to be the thing that doesn't work for me.
Code that doesn't work:
var client = new MobileServiceClient(ApiUrl);
var store = new MobileServiceSQLiteStore("syncstore.db");
store.DefineTable<Entity>();

await client.SyncContext.InitializeAsync(store);
table = client.GetSyncTable<Entity>();

try
{
    await table.PullAsync("all", table.CreateQuery());
}
catch (Exception e)
{
    Debug.WriteLine(e.StackTrace);
}
return await table.ToListAsync();
Code that works:
var client = new MobileServiceClient(configuration.BaseApiUrl);
return await table.ToListAsync();
Can anyone point out something that seems to be wrong? I don't get any exception or anything else that points me in a direction; it just never completes.
UPDATE 1:
I've seen some other SO questions where people had a similar issue because somewhere in their call stack they didn't await, but instead used Task.Result or Task.Wait(). However, I do await this call throughout my whole chain. Here's e.g. a unit test I've written which shows the exact same behaviour as described above: it hangs and never returns.
[Fact]
public async Task GetAllAsync_ReturnsData()
{
var result = await sut.GetAllAsync();
Assert.NotNull(result);
}
UPDATE 2:
I've been sniffing the requests sent by the unit test. It seems it hangs because it keeps repeating the HTTP request over and over, several hundred times, and never finishes the operation.
Finally! I've found the issue.
The problem was that the server returned an IEnumerable from the GetAll operation. Instead it should've been an IQueryable, so the query options PullAsync appends can actually be applied; with IEnumerable they were ignored, which is why the same request was repeated endlessly.
The answer to this question pointed me in the right direction:
IMobileServiceClient.PullAsync deadlock when trying to sync with Azure Mobile Services
The scenario I have is a plugin which needs to run a whole bunch of AddMembersTeamRequests and RemoveMembersTeamRequests (around 2000 of each).
I am having trouble with the following code:
var executeMultipleRequest = new ExecuteMultipleRequest();
executeMultipleRequest.Settings = new ExecuteMultipleSettings() { ContinueOnError = false, ReturnResponses = false };
var organizationRequestCollection = new OrganizationRequestCollection();

foreach (var sharedRecordsOwningTeam in sharedRecordsOwningTeams)
{
    organizationRequestCollection.Add(CreateAddMembersTeamRequest(userId, sharedRecordsOwningTeam.Id));
}

executeMultipleRequest.Requests = organizationRequestCollection;
service.Execute(executeMultipleRequest);
However, it doesn't matter how many requests are part of that ExecuteMultipleRequest; it just seems to freeze the process (I have tried with just one request in the collection).
But the following code seems to work fine:
foreach (var sharedRecordsOwningTeam in sharedRecordsOwningTeams)
{
    service.Execute(CreateAddMembersTeamRequest(userId, sharedRecordsOwningTeam.Id));
}
As you can see, the problem with the working code is that it executes around 2000+ individual requests one at a time.
Would anyone know why using the ExecuteMultipleRequest freezes the process entirely? (Even when there is only 1 add/remove team member request in the request collection)
I think I figured it out.
It was freezing because I was trying to remove a user from the default team of their current Business Unit.
For some reason the request didn't error; it just sat there.
However, I should also point out that using an ExecuteMultipleRequest wasn't any faster than running multiple individual AddMembersTeamRequests.
Even a giant AssociateRequest wasn't any faster.