Response callback on a different thread - multithreading

Is the Callback supposed to be called on a different thread?
Using this code:
client.ExecuteAsync<List<IngredientDto>>(request, Response =>
{
    textBox1.Text += Response.Data.Count;
});
I get an InvalidOperationException:
"The calling thread cannot access this object because a different thread owns it."
Shouldn't the callback be on the UI thread, or am I wrong?

Actually, if you look at the source code, you'll see:
public virtual RestRequestAsyncHandle ExecuteAsync<T>(IRestRequest request, Action<IRestResponse<T>, RestRequestAsyncHandle> callback)
{
    return ExecuteAsync(request, (response, asyncHandle) =>
    {
        IRestResponse<T> restResponse = response as RestResponse<T>;
        if (response.ResponseStatus != ResponseStatus.Aborted)
        {
            restResponse = Deserialize<T>(request, response);
        }
        callback(restResponse, asyncHandle); // <-- the callback is invoked on the same thread that handled the response
    });
}
That means the callback runs on a worker thread, not the UI thread, and you can't update UI objects from a non-UI thread. In WPF you can marshal back to the UI thread with the Dispatcher:
client.ExecuteAsync<List<IngredientDto>>(request, Response =>
{
    Dispatcher.Invoke((Action)(() => { textBox1.Text += Response.Data.Count; }));
});
For the general case, see SynchronizationContext.

Insert bulk data in background in .Net Core

I'm using .NET Core 3.1 and I want to insert bulk data in the background, so that my HTTP request doesn't have to wait for it ("fire and forget").
So I tried the following code:
public object myFunction() {
    Task.Factory.StartNew(() => {
        _context.BulkInsertAsync(logs);
    });
    return data;
}
But nothing happens; no data is saved to the database.
Is it that after my data is returned, _context and logs become null, so the process fails?
Or is there another way to insert my data without waiting for it?
Note: the background task works if I replace the insert statement with sending mail or anything else.
Solved:
Thanks @Peter, I solved it using
Task.Run(async () => await _context.BulkInsertAsync(logs));
Task.Factory.StartNew (or TaskFactory.StartNew) does not understand async delegates: passing a Func<Task> gives you a Task<Task> that completes when the delegate returns, not when the inner work finishes. You should use Task.Run instead, which does have an overload taking a Func<Task> and unwraps it automatically. To get the same behaviour from StartNew you would have to use await await or Unwrap. Please read Stephen Toub's excellent blog post on the subject.
public object myFunction() {
    Task.Run(async () => await _context.BulkInsertAsync(logs));
    return data;
}
This is how I would implement it. This assumes the response is not important.
public NotImportantResult ProcessBulkData()
{
    myFunctionAsync();
    return new NotImportantResult();
}

private static async void myFunctionAsync()
{
    await Task.Factory.StartNew(() => MyBulkProccessor.BulkInsertAsync(logs));
}

How to stop async code from running Node.JS

I'm creating a program where I constantly run and stop async code, but I need a good way to stop the code.
Currently, I have tried two methods:
Method 1:
When a method is running and another method is called to stop the first one, I start an infinite loop to stop that code from running, and then remove the method from the queue (an array).
I'm 100% sure this is the worst way to accomplish it, and it is very buggy.
Code:
class test {
    async Start() {
        const response = await request(options);
        if (this.stopped) {
            while (true) {
                await timeout(10);
            }
        }
    }
}
Code 2:
var tests = [];
function Start() {
    const t = new test();
    tests.push(t);
    t.Start();
}
function Stop() {
    tests.forEach((t) => { t.stopped = true; });
    tests = [];
}
Method 2:
I load the different methods into Workers, and when I need to stop the code, I just terminate the Worker.
It always takes a long time (about a second) to create the Worker, so this isn't ideal either, since I need the code to run without 1-2 second pauses.
Code:
const Worker = require("tiny-worker");
const code = new Worker(path.resolve(__dirname, "./Code/Code.js"))
Stopping:
code.terminate()
Is there any other way that I can stop async code?
The program makes requests using the Node.js request-promise module, so the program is waiting on requests; it's hard to stop the code without one of the two methods above.
Is there any other way that I can stop async code?
Keep in mind the basics of how Node.js works; I think there is some misunderstanding here.
It executes the current function in the current context, and if it encounters an async operation, the event loop schedules its execution somewhere in the future. There is no way to remove that scheduled execution.
More info on the event loop here.
In general, to manage this kind of situation you should use flags or semaphores.
The program makes requests using the Node.js request-promise module, so the program is waiting on requests; it's hard to stop the code
If you need to hard-"stop the code" you can do something like:
function stop() {
    process.exit();
}
But if I'm getting it right, you're launching requests every x milliseconds, and at some point you need to stop sending requests without handling the responses.
You can't de-schedule the response-handling portion, but you can add some logic to it that (when it eventually runs) checks whether the "request loop" has been stopped:
let loop_is_stopped = false;
let sending_loop = null;

async function sendRequest() {
    const response = await request(options); // "wait here"
    // the following lines run only after the request promise resolves
    if (loop_is_stopped) {
        return;
    }
    // do something with the response
}

function start() {
    sending_loop = setInterval(sendRequest, 1000);
}

function stop() {
    loop_is_stopped = true;
    clearInterval(sending_loop);
}

module.exports = { start, stop };
We can abort with Promise.all without killing the whole app (process.exit()). Here is my example (you can use another trigger for calling controller.abort()):
const controller = new AbortController();

class Workflow {
    static async startTask() {
        await new Promise((res) => setTimeout(() => {
            res(console.log('RESOLVE'));
        }, 3000));
    }
}

class ScheduleTask {
    static async start() {
        return await Promise.all([
            new Promise((_res, rej) => {
                controller.signal.addEventListener('abort', () => rej(new Error('Aborted')));
            }),
            Workflow.startTask(),
        ]);
    }
}

setTimeout(() => {
    controller.abort();
    console.log('ABORTED!!!');
}, 1500);

const run = async () => {
    try {
        await ScheduleTask.start();
        console.log('DONE');
    } catch (err) {
        console.log('ERROR', err.message);
    }
};

run();
// ABORTED!!!
// ERROR Aborted
// RESOLVE
"DONE" will never be shown, and the inner res still completes (RESOLVE still prints), because aborting only stops us from waiting for the task; it does not cancel the task itself.
It might also be better to run your code as a separate script with its own process.pid; when you need to interrupt that functionality, you can kill the process by pid from another place in your code with process.kill.

Node.js child process fork return response -- Cannot set headers after they are sent to the client

Situation:
I have a function that does an expensive operation, such as fetching a large query from MongoDB and then performing a lot of parsing and analysis on the response. I have offloaded this expensive operation to a child process fork, and I wait for the worker to finish before sending the response, so as not to block the main event loop.
Current implementation:
I have an API endpoint GET {{backend}}/api/missionHistory/flightSummary?days=90&token={{token}}
api entry point code:
missionReports.js
const cp = require('child_process');
// if reportChild is initialized here: "Can't set headers after they are sent"
const reportChild = cp.fork('workers/reportWorker.js');

exports.flightSummary = function (req, res) {
    let search_query = req.query;
    // if it is initialized here instead, there is no error:
    // const reportChild = cp.fork('workers/reportWorker.js');
    logger.debug(search_query);
    let payload = { 'task': 'flight_summary', 'search_params': search_query };
    reportChild.send(payload);
    reportChild.on('message', function (msg) {
        logger.info(msg);
        if (msg.worker_status === 'completed') {
            return res.json(msg.data);
        }
    });
};
worker code:
reportWorker.js
process.on('message', function (msg) {
    process.send({ 'worker_status': 'started' });
    console.log(msg);
    switch (msg.task) {
        case 'flight_summary':
            findFlightHours(msg.search_params, function (response) {
                logger.info('completed');
                process.send({ 'worker_status': 'completed', 'data': response });
            });
            break;
    }
});
Scenario 1: reportChild (the fork) is initialized at the top of the module. The API call works once and returns correct data; on the second call it crashes with "Cannot set headers after they are sent". I stepped through the code, and it definitely only sends one response per API call.
Scenario 2: if I initialize reportChild inside the route handler, it works perfectly every time. Why is that? Is the forked child process not killed unless it's redefined? Is this the standard way to implement child processes?
This is my first attempt at threading in Node.js; I am trying to move expensive operations off of the main event loop into different workers. Let me know what the best practice is for this situation. Thanks.

How to work around amqplib's Channel#consume odd signature?

I am writing a worker that uses amqplib's Channel#consume method. I want this worker to wait for jobs and process them as soon as they appear in the queue.
I wrote my own module to abstract away amqplib; here are the relevant functions for getting a connection, setting up the queue and consuming a message:
const getConnection = function (host) {
    return amqp.connect(host);
};

const createChannel = function (conn) {
    connection = conn;
    return conn.createConfirmChannel();
};

const assertQueue = function (channel, queue) {
    return channel.assertQueue(queue);
};

const consume = Promise.method(function (channel, queue, processor) {
    processor = processor || function (msg) { if (msg) Promise.resolve(msg); };
    return channel.consume(queue, processor);
});

const setupQueue = Promise.method(function setupQueue(queue) {
    const amqp_host = 'amqp://' + ((host || process.env.AMQP_HOST) || 'localhost');
    return getConnection(amqp_host)
        .then(conn => createChannel(conn)) // -> returns a `Channel` object
        .tap(channel => assertQueue(channel, queue));
});

consumeJob: Promise.method(function consumeJob(queue) {
    return setupQueue(queue)
        .then(channel => consume(channel, queue));
})
My problem is with Channel#consume's odd signature. From http://www.squaremobius.net/amqp.node/channel_api.html#channel_consume:
#consume(queue, function(msg) {...}, [options, [function(err, ok) {...}]])
The callback is not where the magic happens; the message processing actually has to go in the second argument, and that breaks the flow of promises.
This is how I planned on using it:
return queueManager.consumeJob(queue)
.then(msg => {
// do some processing
});
But it doesn't work. If there are no messages in the queue, the promise is rejected, and then when a message is dropped into the queue nothing happens. If there is a message, only one message is processed, and then the worker stalls because it has exited the "processor" function from the Channel#consume call.
How should I go about it? I want to keep the queueManager abstraction so my code is easier to reason about but I don't know how to do it... Any pointers?
As @idbehold said, promises can only be resolved once. If you want to process messages as they come in, there is no way around this callback. Channel#get will only check the queue once and then return; it wouldn't work for a scenario where you need a worker.
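If you still want a promise chain per message while keeping the queueManager abstraction, one option (a sketch, not amqplib's own API beyond consume/ack/nack) is to let the resident consume callback start a fresh promise chain for each delivery. A tiny fake channel stands in for the real one here so the sketch is self-contained:

```javascript
const acked = [];

// Fake channel mimicking the amqplib methods this sketch touches.
const fakeChannel = {
  consume(queue, onMessage) { this.onMessage = onMessage; },
  ack(msg) { acked.push(msg.content); },
  nack(msg) {},
};

function startWorker(channel, queue, handleJob) {
  // The subscription stays alive forever; each delivery gets its own chain.
  channel.consume(queue, (msg) => {
    if (msg === null) return;           // amqplib passes null when the consumer is cancelled
    Promise.resolve(msg)
      .then(handleJob)                  // per-message processing, may itself return a promise
      .then(() => channel.ack(msg))     // ack only after successful processing
      .catch(() => channel.nack(msg));  // requeue on failure
  });
}

startWorker(fakeChannel, 'jobs', (msg) => console.log('processing', msg.content));

// Simulate two deliveries: the worker never "exits" the processor,
// so both flow through their own promise chains.
fakeChannel.onMessage({ content: 'job-1' });
fakeChannel.onMessage({ content: 'job-2' });
```

With the real amqplib channel you would pass the Channel returned by setupQueue instead of fakeChannel; the underlying point is that consumeJob should resolve to a subscription, not to a single message.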
Just as an option: you can present your application as a stream of messages (or events). There is a library for this: http://highlandjs.org/#examples
Your code could look like this (it isn't a finished sample, but I hope it illustrates the idea):
let messageStream = _((push, next) => {
    consume(channel, queue, (msg) => {
        push(null, msg);
    });
});

// now you can operate on your stream in a functional style
messageStream.map((msg) => msg + 'some value').each((msg) => { /* do something with msg */ });

This approach gives you a lot of primitives for synchronization and transformation.

Javascript wait for request to process in a for loop

In my Chrome extension I was looking for a way to stop my for loop from proceeding until it gets a response from the content script. Sample code below:
function abc() {
    chrome.tabs.query({ 'status': 'complete' }, function (tabArray) {
        for (var i = 0, tab; tab = tabArray[i]; i++) {
            var currentUrl = tab.url;
            var tabId = tab.id;
            if (currentUrl.match(otherthing)) {
                chrome.tabs.sendRequest(tabId, { 'type': 'getrequiredthing' },
                    function (response) {
                        if (response.isrequiredthing) {
                            callfunction(tabId);
                        }
                    }
                );
            }
        }
    });
}
Here, when I get a matching URL, I send a request to the page to get some info, and if the info is positive I need to call callfunction. But the for loop iterates through tabId very quickly, and even when the response is positive it calls callfunction with the next (or a later) tabId.
Can you give your opinions on how to make the for loop wait until this response is received?
Thanks
The problem is that the third argument to sendRequest does not block until the response is ready. By design, JavaScript almost never blocks; this is a Good Thing. Instead, it uses an "event-driven" model.
Another problem is due to lexical scoping: when callfunction is called, tabId has the most recent value, not the value it had when sendRequest was called. To get around this, you need to create a separate scope for each loop iteration, e.g.
for (...) {
    var tabId = ...;
    if (...) {
        (function (localTabId) {
            chrome.tabs.sendRequest(..., function (response) {
                if (response.isrequiredthing) {
                    callfunction(localTabId);
                }
            });
        })(tabId);
    }
}
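For what it's worth, in modern JavaScript (ES2015+) declaring the loop variable with let gives each iteration its own binding, so the IIFE wrapper becomes unnecessary. A stand-alone sketch of the difference, using setTimeout in place of the Chrome API:

```javascript
// With `var` there is a single i shared by every callback: by the time
// the timers fire, the loop has finished and i is 3. With `let`, each
// iteration gets its own binding, so each callback sees its own value.
const withVar = [];
const withLet = [];

for (var i = 0; i < 3; i++) {
  setTimeout(() => withVar.push(i), 0);
}
for (let j = 0; j < 3; j++) {
  setTimeout(() => withLet.push(j), 0);
}

setTimeout(() => {
  console.log(withVar); // [ 3, 3, 3 ]
  console.log(withLet); // [ 0, 1, 2 ]
}, 10);
```

In the extension code above, that would mean capturing tab.id in a let/const inside the loop body instead of wrapping the sendRequest call in an IIFE.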
