I am using Angular 5 with PouchDB. When I save a user, I need to show it immediately in the users list. Meanwhile, a background thread must geolocate the user's city and update the coordinates for that user.
The geolocation lookup takes a second or two, which is why I am thinking of running it in a background thread.
I looked into the Angular service worker, but I think that is meant for caching files for offline use.
I also looked at the Angular CLI web worker support, but it did not explain how to call a background service and get a value back to the main thread.
Is there a clear way to run a background thread in Angular 5?
Using RxJS you can create an observable that does what you want:
import { Observable } from 'rxjs/Observable';

myObservable = Observable.create(function (observer) {
    if (navigator.geolocation) {
        navigator.geolocation.getCurrentPosition(position => observer.next(position));
    }
});
Then use it to get the desired value asynchronously:
myObservable.subscribe(pushedValue => console.log(pushedValue));
This is not real multithreading (not needed in this case, in my opinion); for real parallelism you would need to look at web workers.
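If real parallelism turns out to be necessary, the general pattern with a plain Web Worker is sketched below. This is only a rough sketch: the file name geo.worker.js and the lookUpCity() helper are made up for illustration, and navigator.geolocation itself is not available inside a worker, so only the CPU-heavy part belongs there.

// geo.worker.js (hypothetical worker file): runs off the main thread
self.onmessage = (event) => {
    const { lat, lng } = event.data;
    const city = lookUpCity(lat, lng); // hypothetical CPU-heavy lookup
    self.postMessage({ city });        // send the result back to the main thread
};

// main thread, e.g. inside an Angular service
const worker = new Worker('geo.worker.js');
worker.onmessage = (event) => {
    console.log('City resolved in the background:', event.data.city);
};
navigator.geolocation.getCurrentPosition(pos =>
    worker.postMessage({ lat: pos.coords.latitude, lng: pos.coords.longitude })
);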
For Angular 5 or earlier, I am using setTimeout():
var _setTimeoutHandler = setTimeout(() => { myFunction(); });
Make sure you clear _setTimeoutHandler with clearTimeout() before the component is destroyed, to avoid leaking resources.
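A minimal sketch of what that cleanup could look like in a component (the component name and myFunction() are placeholders):

import { Component, OnDestroy } from '@angular/core';

@Component({ selector: 'app-users', template: '' })
export class UsersComponent implements OnDestroy {
    private _setTimeoutHandler = setTimeout(() => this.myFunction(), 0);

    ngOnDestroy() {
        // clear the handle so the callback cannot fire after the component is destroyed
        clearTimeout(this._setTimeoutHandler);
    }

    private myFunction() {
        // the deferred work goes here
    }
}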
I am also still searching for a better way.
This is an update to a question I asked previously, but I wasn't thinking straight when I asked it (I was taking a very backwards approach to my solution). I'm currently working with Puppeteer and I'm trying to create an association between a particular task and a Puppeteer browser instance. Right now I am getting the browser's context id using:
const {browserContextId} = await browser._connection.send('Target.createBrowserContext');
and I am storing it in a file along with other details related to the task. My goal is to somehow be able to close that browser instance using the stored context id. I've given this issue a read on Puppeteer's GitHub hoping that it would help in some way, but it seems that it's not super helpful to me, as it's not really related to what I'm doing.
The real issue is that I am going to be spawning browser instances in one file and attempting to close them in another; otherwise this wouldn't be an issue at all. Right now the only thing I've been able to do is spawn another browser instance using the context id (pretty much useless for my task), and I have had no luck in closing or disposing of it.
Any help would be greatly appreciated, thanks!
P.S. If there is a better approach to solving this association issue I'm all ears!
Turns out I was thinking about it way too much and trying to make it too complex. For anyone trying to do something similar I'll leave my solution here for you.
Instead of using the browser's context id I found it much easier to just grab the browser's process id (pid). From there I could kill the process using different strategies based on where I was running the close command.
For Node.js:
const puppeteer = require('puppeteer');
const fs = require('fs');

// Let's say, for example, we're instantiating our browser like this
const browser = await puppeteer.launch({ headless: false });

// You can simply run this to get the browser's pid
const browserPID = browser.process().pid;

// Then you can either store it for use later
fs.writeFile(file, JSON.stringify(jsondata, null, 4), (err) => {
    if (err) console.log(err);
});

// Or just kill the process
process.kill(browserPID);
Keep in mind that if you are storing the PID, you need to read the file and parse the data before passing it into the process.kill command.
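For example, a rough sketch of reading the stored PID back out before killing it (the file name and JSON shape here are just illustrative, use whatever you actually wrote):

const fs = require('fs');

// hypothetical file written earlier, e.g. { "browserPID": 12345, "task": "scrape" }
const data = JSON.parse(fs.readFileSync('task-details.json', 'utf8'));

// process.kill expects a number, so make sure the parsed value is one
process.kill(Number(data.browserPID));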
For React via Electron
// Require process from the electron file
const process = window.require('process');
// Then same process as before
process.kill(yourbrowserPID);
Hopefully my stupidity can help someone in the future if they are trying to do something similar. It was way easier than I was making it out to be.
My watchOS app uses Core Data for local storage. Saving the managed object context is done in the background:
var backgroundContext = persistentContainer.newBackgroundContext()
//…
backgroundContext.perform {
    //…
    let saveError = self.saveManagedContext(managedContext: self.backgroundContext)
    completion(saveError)
}
//…
func saveManagedContext(managedContext: NSManagedObjectContext) -> Error? {
    if !managedContext.hasChanges { return nil }
    do {
        try managedContext.save()
        return nil
    } catch let error as NSError {
        return error
    }
}
Very rarely, my context is not saved. One reason I can think of is the following:
After my data are changed, I initiate a background Core Data context save operation.
But before the background task starts, the user puts the watch extension into the background, and it is then terminated by watchOS.
This probably also prevents the Core Data background save from executing.
My questions are:
- Is this scenario possible?
- If so, what would be the correct handling of a core data background context save?
PS: On the iOS side I do the same, but there it is possible to request additional background processing time using
var bgTask: UIBackgroundTaskIdentifier = application.beginBackgroundTask(expirationHandler: {
    //…
    application.endBackgroundTask(bgTask)
})
By now, I think I can answer my question:
If the user puts the watch extension into the background, applicationDidEnterBackground() is called on the extension delegate. The docs say:
The system typically suspends your app shortly after this method returns; therefore, you should not call any asynchronous methods from your applicationDidEnterBackground() implementation. Asynchronous methods may not be able to complete before the app is suspended.
I think this also applies to background tasks that were initiated before, so it is indeed possible that a Core Data background save does not complete.
Thus, the core data save should be done on the main thread. My current solution is the following:
My background context is no longer set up using persistentContainer.newBackgroundContext(), since such a context is connected directly to the persistent store coordinator, and when it is saved, changes are written to the persistent store, which may take relatively long. Instead, I now set up the background context with
var backgroundContext = NSManagedObjectContext.init(concurrencyType: .privateQueueConcurrencyType)
and set its parent property as
backgroundContext.parent = container.viewContext
where container is the persistent container. Now, when the background context is saved, the changes are not written to the persistent store but to its parent, the view context, which is handled by the main thread. Since this saving is only done in memory, it is pretty fast.
Additionally, in applicationDidEnterBackground() of the extension delegate, I save the view context. Since this is done synchronously on the main thread, it should complete before the app is suspended. The docs say:
The applicationDidEnterBackground() method is your last chance to perform any cleanup before the app is terminated.
In normal circumstances, enough time should be provided by watchOS. If not, other docs say:
If needed, you can request additional background execution time by calling the ProcessInfo class's performExpiringActivity(withReason:using:) method.
This is probably equivalent to setting up a background task in iOS as shown in my question.
Hope this helps somebody!
I am trying to use Titanium execution contexts to get parallel code execution between the main application context and others. I am using createWindow with a url property that refers to a .js file inside the "lib" folder. But judging by the logging on both iOS and Android devices, the different contexts are executed on the app's main thread, so there is no parallelism here.
My new context trigger inside my Alloy controller:
var win2 = Ti.UI.createWindow({
    title: 'New Window',
    url: 'thread.js',
    backgroundColor: '#fff'
});
win2.open();
Ti.API.log('after open');
My thread.js contents:
Ti.API.log("this is the new context");
Ti.App.fireEvent("go", {});
while (true) {
    Ti.API.log('second context');
}
This while loop apparently blocks the main context (my Alloy controller), which waits for it to exit.
Any suggestions on how I can execute some code (mainly heavy SQLite db access) in the background so that the UI stays responsive? (Web workers are not an option for me.)
You could try to achieve the wanted behaviour with the setInterval() or setTimeout() methods.
setInterval():
function myFunc() {
    // your code
}

// set the interval
setInterval(myFunc, 2000); // this will run the function every 2 seconds
Another suggested method would be to fire a custom event when you need the background behavior since it is processed in its own thread. This is also suggested in the official documentation.
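A rough sketch of that custom-event approach (the event names and payload are made up for illustration):

// registered once, e.g. in app/alloy.js
Ti.App.addEventListener('app:heavyWork', function (e) {
    // heavy sqlite access here, using e.payload
    Ti.App.fireEvent('app:heavyWorkDone', { result: 'ok' });
});

// in the Alloy controller: fire the event instead of blocking the UI
Ti.App.addEventListener('app:heavyWorkDone', function (e) {
    Ti.API.info('background work finished: ' + e.result);
});
Ti.App.fireEvent('app:heavyWork', { payload: 42 });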
AFAIK, Titanium is single-threaded, because JavaScript is single-threaded. You can get parallel execution with native modules, but you'll have to code that yourself for each platform.
Another option is to use web workers, but I consider that to be a hack.
I am implementing a very basic website using Node.js and the Express framework. The idea is that the user enters the website and clicks a button that triggers a CPU-intensive task on the server, then gets a message in the browser upon completion of the task.
The confusing part for me is how to accomplish this task for each user without blocking the event loop, so that no user has to wait for another to finish. Also, how do I use a new instance of the resources involved (objects, variables, ...) for each user?
After playing and reading around, I came across child_process. Basically, I thought of forking a new child process for each user so that however long the sorting takes, it won't block my event loop. However, I am still not sure if this is the best thing to do for such a scenario.
Now, I have done what I wanted, but only for a single user; when another user comes in, things become messy and variables are shared. I know that I should not use global variables declared in the module, but what would be another way to make variables shared among functions within a single module yet separate for each user?
I know the question may sound very basic, but I am missing the idea of how Node.js serves different users with new variables and objects associated with each individual user.
In short, my questions are:
1- How does Node serve multiple users simultaneously?
2- When and how should I resort to forking or executing a new child process, and should it be one per user or based on the number of CPU cores?
3- How do I separate resources in my application for each user, so that each user has their own counters, emails and other objects and variables?
4- When do I need to kill my child process?
Thanks in advance.
CODE:
var cp = require('child_process');
var child = cp.fork('./routes/mysort-module');

exports.user = function(req, res) {
    // child = cp.fork('./routes/mysort-module'); // Should I fork a new child for each user?
    child.on('message', function(m) { res.send('DONE'); });
    child.send('START PROCESS'); // Forking a cpu-intensive task in mysort-module.js
};
IN MY SORTING MODULE:
var variables = require(...);
// GLOBAL VARIABLES to be shared among different functions in this module

process.on('message', function(m) {
    // sort: the cpu-intensive task
    process.send('DONE');
});

// OTHER FUNCTIONS in THE MODULE that make use of the global variables
You should try to split up your question. However, I hope this answers it.
Question 1: A global variable is not limited to request scope. That's part of Node's definition of a global, and it doesn't make sense to enforce such a limit somehow. You shouldn't use globals at all.
The request scope is given by the HTTP module:
http.createServer(function(req, res) {
    var a = 1;
    // req, res, and a are in request scope for your user-associated response
});
Eliminating globals shouldn't be that hard: if modules A and B share a global G and module C calls A.doThis() and B.doThat(), change C to call A.doThis(G) and B.doThat(G) instead. Do this for all occurrences of G and reduce its scope to local or request scope.
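A small sketch of that refactoring (the module names and the shape of G are placeholders):

// a.js: takes the shared state explicitly instead of reading a module-level G
exports.doThis = function (g) {
    g.counter += 1;
};

// c.js: creates a fresh G per request and passes it along
var A = require('./a');
var B = require('./b');

exports.handler = function (req, res) {
    var g = { counter: 0 };   // request-scoped, not shared between users
    A.doThis(g);
    B.doThat(g);
    res.send(g);
};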
Additionally, have a look at "sessions" if you need a scope covering multiple requests from one client.
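For example, with the express-session middleware (assuming it is installed), a per-client counter across requests could look like this:

var express = require('express');
var session = require('express-session');

var app = express();
app.use(session({ secret: 'keyboard cat', resave: false, saveUninitialized: true }));

app.get('/', function (req, res) {
    // req.session is scoped to one client across multiple requests
    req.session.views = (req.session.views || 0) + 1;
    res.send('You have visited this page ' + req.session.views + ' times');
});

app.listen(3000);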
Question 2: Start the child process inside the request handler:
exports.user = function(req, res) {
    var child = cp.fork('./routes/mysort-module');
    // ...
};
Question 3: See question 1?
Question 4: After the process returned the calculated results.
child.on('message', function(result) {
    // go on and send the result to the client somehow...
    child.kill();
});
I have an ASP.NET MVC 3 (.NET 4) web application.
This app fetches data from an Oracle database and mixes some information with another SQL database.
Many tables are joined together and a lot of database reading is involved.
I have already optimized the fetching side as best I could, and I don't have problems with that.
I've used caching to save information I don't need to fetch over and over.
Now I would like to build a responsive interface and my goal is to present the users the order headers filtered, and load the order lines in background.
I want to do that because I need to manage all the order lines as a whole, because of some calculations.
What I have done so far is using jQuery to make an Ajax call to my action where I fetch the order headers and save them in a cache (System.Web.Caching.Cache).
When the Ajax call has succeeded I fire off another Ajax call to fetch the lines (and, once again, save the result in a cache).
It works quite well.
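Roughly, the chained calls look like this (a simplified sketch, with placeholder action URLs and rendering helpers):

// simplified sketch: action URLs and rendering helpers are placeholders
$.ajax({ url: '/Orders/GetHeaders', type: 'POST' })
    .done(function (headers) {
        renderHeaders(headers);   // show the headers right away
        // once the headers are cached server-side, fetch the lines in a second call
        $.ajax({ url: '/Orders/GetLines', type: 'POST' })
            .done(function (lines) {
                renderLines(lines);
            });
    });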
Now I was trying to figure out if I can move some of this logic from the client to the server.
When my action is called, I want to fetch the order headers, start a new thread responsible for fetching the order lines, and return the result to the client.
In a test app I tried both ThreadPool.QueueUserWorkItem and Task.Factory.StartNew, but I need the spawned thread to be able to access my cache.
I've put together a test app and done something like this:
TEST 1
[HttpPost]
public JsonResult RunTasks01()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");
    ThreadPool.QueueUserWorkItem(o => MyFunc(1, 5000000, myCache));
    return (Json(true, JsonRequestBehavior.DenyGet));
}
TEST 2
[HttpPost]
public JsonResult RunTasks02()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");
    Task.Factory.StartNew(() =>
    {
        MyFunc(1, 5000000, myCache);
    });
    return (Json(true, JsonRequestBehavior.DenyGet));
}
MyFunc creates a list of items and saves the result in a cache; pretty silly, but it's just a test.
I would like to know if someone has a better solution, or knows of any implications of accessing the cache from a separate thread.
Is there anything I need to be aware of, should avoid, or could improve?
Thanks for your help.
One possible issue I can see with your approach is that System.Web.HttpContext.Current might not be available in a separate thread, as that thread could run later, once the request has finished. I would recommend using the classes in the System.Runtime.Caching namespace, introduced in .NET 4.0, instead of the old HttpContext.Cache.