How to show a page on install but not on update

I have a question about the Chrome extension install/update event. If I add the onInstalled event listener in top-level code in the background script, is there a time frame within which my listener is guaranteed to catch the event?
I'm asking because my tests showed that if some logic executes before I hook up the onInstalled listener, the listener apparently never runs, as if the event fires in the meantime.
Can someone explain in more detail how this event works in the context of other logic in the background script, or point me to some documentation? I haven't been able to find anything useful.
Thanks!
Update (@Noam Hacker): Due to company policy I can't post any real code here, but here is some pseudocode that illustrates my problem:
/**
 * Setup in which I miss the onInstalled event.
 */
function firstLogicThatRunsOnBackgroundLoad() {
    // perform some logic
    // perform some asynchronous operations via generators and promises,
    // which can take a while
    chrome.runtime.onInstalled.addListener(function (details) {
        if (details.reason == "install") {
            // this logic never gets executed
        } else if (details.reason == "update") {
            // perform some logic
        }
    });
}
/**
 * Setup in which I catch the onInstalled event.
 */
function firstLogicThatRunsOnBackgroundLoad() {
    chrome.runtime.onInstalled.addListener(function (details) {
        if (details.reason == "install") {
            // this logic executes
        } else if (details.reason == "update") {
            // perform some logic
        }
    });
    // perform some logic
    // perform some asynchronous operations via generators and promises,
    // which can take a while
}

onInstalled listeners catch events in these situations: when the extension is first installed, when the extension is updated to a new version, and when Chrome is updated to a new version.
Since this is all asynchronous it happens in the background and, according to the documentation, fires immediately in any of these situations. Review asynchronous programming for some clarity on this.
Documentation: the chrome.runtime.onInstalled event in the chrome.runtime reference (https://developer.chrome.com/docs/extensions/reference/api/runtime).
According to your question it seems you want help executing code in the right order. This answer provides a helpful framework for your case (using the reason property).
chrome.runtime.onInstalled.addListener(function (details) {
    if (details.reason == "install") {
        // call a function to handle a first install
    } else if (details.reason == "update") {
        // call a function to handle an update
    }
});
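For the question in the title (show a page on install but not on update), a minimal sketch might open an onboarding page only for the "install" reason; onboarding.html is a hypothetical page bundled with the extension:
chrome.runtime.onInstalled.addListener(function (details) {
    if (details.reason === "install") {
        // open a bundled page on first install only, not on updates
        chrome.tabs.create({ url: chrome.runtime.getURL("onboarding.html") });
    }
});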

I needed to figure this out too. While I didn't find anything authoritative, I did throw a couple of console.time() statements into my background script.
The code was something like this:
console.time('onInstall event');
console.time('first function');
chrome.runtime.onInstalled.addListener(details => {
    console.timeEnd('onInstall event');
});
// 7 module imports
someSyncFunction(); // console.timeEnd('first function') is called on the first line of this function
Then I just loaded/reloaded the extension (unpacked, in dev mode) a few times. onInstalled seems to fire pretty reliably within the first 50 ms, while the first function runs within the first millisecond. Here are the results:
(first function, onInstall event)
(0.282 ms, 47.2 ms)
(0.331 ms, 45.3 ms)
(0.327 ms, 49.1 ms)
(0.294 ms, 45.9 ms)

Given that the documentation says
“Listeners must be registered synchronously from the start of the page.”
and
“Do not register listeners asynchronously, as they will not be properly triggered.”
it seems Chrome guarantees that synchronously registered listeners never miss the event, no matter how long your code takes to evaluate. This would be done by Chrome firing events only after evaluating your entire top-level code.
My hypothesis is that onInstalled actually works the way a hypothetical onInitialized event would. No test data, though.
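To make this concrete, here is a minimal sketch (my own, not from the documentation) contrasting the two registration patterns; loadSettings is a hypothetical async helper:
// safe: registered synchronously at the top level, so the listener
// is in place before Chrome dispatches onInstalled
chrome.runtime.onInstalled.addListener(function (details) {
    console.log('reason:', details.reason);
});

// risky: registration happens only after the promise resolves, so the
// event may already have been dispatched and the listener never fires
loadSettings().then(function () {
    chrome.runtime.onInstalled.addListener(function (details) {
        console.log('too late for:', details.reason);
    });
});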

Related

How to detect that Chrome Extension with Manifest v3 was unloaded

Our Chrome extension has content scripts and a background script communicating with each other. When the extension is updated, the background script is stopped and the content scripts start getting Error: Extension context invalidated. In V2, we used the port.onDisconnect event as described here to clean things up. But in V3, this event is also fired after 5 minutes, when the background service worker is automatically terminated. So this event now means either extension unloading (and the cleanup should be done) or just a SW lifecycle event (no need to clean up; reconnecting is fine).
So the question is: how can one unambiguously determine whether the cleanup is necessary?
I've tried:
1. chrome.management events (onDisabled etc.). But unfortunately chrome.management is undefined in my content script.
2. Checking for chrome.runtime.id inside the port.onDisconnect callback to determine that the extension was unloaded. But the id is still present at that moment.
3. Again inside port.onDisconnect, calling chrome.runtime.connect() again and catching the exception. But there's no exception! The port is created successfully, but it receives neither messages nor its own onDisconnect events.
4. Trying point 3 inside setTimeout(..., 0) and setTimeout(..., 100). The former doesn't produce an exception either. The latter does, but it introduces a delay of questionable duration (why 100? would it work if the CPU is overloaded?) and potential race conditions, where other extension functionality could try to send messages with unpredictable results. So I'd appreciate a more bullet-proof solution.
Thanks to wOxxOm's suggestions, I've found a solution that seems to work for now: every once in a while (more often than every 5 minutes), disconnect the port in the content script and then reconnect. The code looks like this:
let portToBackground: chrome.runtime.Port | undefined = openPortToBackground();

function openPortToBackground(): chrome.runtime.Port {
    const port = chrome.runtime.connect();
    const timeout = setTimeout(() => {
        console.log('reconnecting');
        portToBackground = openPortToBackground();
        port.disconnect();
    }, 2 * 60 * 1000); // 2 minutes here, just to be sure
    port.onDisconnect.addListener(() => {
        clearTimeout(timeout);
        if (port !== portToBackground) return;
        // the current port disconnected without us initiating it, so the
        // extension context is gone: perform the cleanup here, e.g.
        // portToBackground = undefined;
    });
    return port;
}

export function isExtensionContextInvalidated(): boolean {
    return !portToBackground;
}
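For completeness, the background side only needs to accept these connections; a minimal sketch (the handler body is an assumption):
chrome.runtime.onConnect.addListener((port) => {
    port.onDisconnect.addListener(() => {
        // nothing special is needed here: the content script side
        // decides whether a disconnect means cleanup or just a reconnect
    });
});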

Node.js Complex Design Principle (Promise, async/await)

This is a common pattern from my previous work, so I usually have a very complex use case. Take for example:
async function doThis() {
    for (let i = 0; i < 100; i++) { // 100x
        try {
            insertToDatabase()
            await selectAndManipulateData()
            createEmailWorker()
            /** and many more **/
        } catch {
            logToAFile()
        }
    }
}
The code works, but it's complicated: one function doing all the things. The only reason I do this is that I can verify in real time that if one function fails, the following functions won't run, so there won't be any incorrect data.
What I want to know is: what is the best architecture for structuring the project without sacrificing data integrity? (Or is it already good enough?)
const doThis = async () => {
    try {
        for (let i = 0; i < 100; i++) { // 100x
            await insertToDatabase();
            await selectAndManipulateData();
            await createEmailWorker();
            /** and many more **/
        }
    } catch {
        await logToAFile();
    }
}
The best way of doing this is to always await each call and to make the surrounding function async; the ES6+ syntax gives you a lot more flexibility.
Always put your loop in a try/catch: any error thrown by an awaited call will land in the catch, so you can tell which call failed.
Actually, I would separate the persistence, manipulation, and email jobs. Consider that storing your data is a single responsibility. In addition, your manipulation and email workers could run as scheduled jobs; once triggered, each job should check whether there is data related to its responsibility.
Another way is to change these scheduled jobs into triggered jobs: you can build a chain of responsibility in which each job triggers the next, and each decides whether to do any work; see the sketch below.
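As a sketch of the triggered-jobs idea (the function names come from the question; the event wiring and event names are assumptions):
const { EventEmitter } = require('events');
const jobs = new EventEmitter();

// each job does one thing, then triggers the next link in the chain
jobs.on('record:stored', async (record) => {
    try {
        await selectAndManipulateData(record);
        jobs.emit('record:manipulated', record);
    } catch (err) {
        await logToAFile(err);
    }
});

jobs.on('record:manipulated', async (record) => {
    try {
        await createEmailWorker(record);
    } catch (err) {
        await logToAFile(err);
    }
});

// entry point: persist the record, then let the chain take over
async function processRecord(record) {
    try {
        await insertToDatabase(record);
        jobs.emit('record:stored', record);
    } catch (err) {
        await logToAFile(err);
    }
}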

State not being set properly - React Native

I am very confused by what I am getting from my code. The following should log data.points, then set this.state.points to data.points, and then log this.state.points; however, the two logged values are not equal. This is the exact code I am using, so I am sure this is what was output. I am probably overlooking something, but I have spent the past hour reading over and logging this code, and I still cannot figure it out. Here is the code that I run:
console.log(data.points);
if (!this.state.hasPressed) {
    this.setState({points: data.points})
    console.log('in not hasPressed if');
}
console.log(this.state.points);
However, in the Chrome remote debugger I get this:
["114556548393525038426"]
in not hasPressed if
[]
setState is an asynchronous call. You have to use the callback argument to run code after setState completes.
console.log(data.points);
if (!this.state.hasPressed) {
    this.setState({points: data.points}, () => {
        console.log(this.state.points);
    });
    console.log('in not hasPressed if');
}
Refer to the React setState() API:
setState(updater, [callback])
setState() does not always immediately update the component. It may batch or defer the update until later. This makes reading this.state right after calling setState() a potential pitfall. Instead, use componentDidUpdate or a setState callback (setState(updater, callback)), either of which are guaranteed to fire after the update has been applied. If you need to set the state based on the previous state, read about the updater argument below.
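As the quoted docs note, componentDidUpdate is the other reliable place to read the new state; a minimal sketch:
componentDidUpdate(prevProps, prevState) {
    if (prevState.points !== this.state.points) {
        // this.state.points has been applied by the time this runs
        console.log(this.state.points);
    }
}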

Waiting for JavaScript event with Selenium

I am building an automation framework on top of Selenium (Node.js) consisting of a number of steps.
Each step follows the previous one, after it completes, returning a promise (like the one returned by Selenium's driver.click(), etc.).
Is it possible to wait for a JavaScript event to trigger in the browser? If so, what is the pattern to follow?
Use .executeAsyncScript to wait for an event to occur:
driver.executeAsyncScript(function(callback) {
    window.addEventListener('message', function onmessage() {
        window.removeEventListener('message', onmessage);
        callback();
    });
});
The doc:
http://seleniumhq.github.io/selenium/docs/api/javascript/module/selenium-webdriver/lib/webdriver_exports_WebDriver.html#executeAsyncScript
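Since any extra arguments to executeAsyncScript are passed to the injected script before the callback, the event name can be parameterized; a sketch (the waitForEvent helper and the event name are my own):
function waitForEvent(driver, eventName) {
    return driver.executeAsyncScript(function (eventName, callback) {
        window.addEventListener(eventName, function handler() {
            window.removeEventListener(eventName, handler);
            callback();
        });
    }, eventName);
}

// usage as one step in a promise chain:
// await waitForEvent(driver, 'app:ready');
Note that executeAsyncScript is subject to the driver's script timeout, so long waits may need that timeout raised.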

How to avoid the need to delay event emission to the next tick of the event loop?

I'm writing a Node.js application using a global event emitter. In other words, my application is built entirely around events. I find that this kind of architecture works extremely well for me, with the exception of one edge case, which I will describe here.
Note that I do not think knowledge of Node.js is required to answer this question. Therefore I will try to keep it abstract.
Imagine the following situation:
A global event emitter (called mediator) allows individual modules to listen for application-wide events.
An HTTP server is created, accepting incoming requests.
For each incoming request, an event emitter is created to deal with events specific to that request.
An example (purely to illustrate this question) of an incoming request:
mediator.on('http.request', function(request, response, emitter) {
    // deal with the new request here, e.g.:
    response.send("Hello World.");
});
So far, so good. One can now extend this application by identifying the requested URL and emitting appropriate events:
mediator.on('http.request', function(request, response, emitter) {
    // identify the requested URL
    if (request.url === '/') {
        emitter.emit('root');
    }
    else {
        emitter.emit('404');
    }
});
Following this one can write a module that will deal with a root request.
mediator.on('http.request', function(request, response, emitter) {
    // when root is requested
    emitter.once('root', function() {
        response.send('Welcome to the frontpage.');
    });
});
Seems fine, right? Actually, it is potentially broken code. The reason is that the line emitter.emit('root') may be executed before the line emitter.once('root', ...), in which case the listener never gets executed.
One could deal with this specific situation by delaying the emission of the root event to the end of the event loop:
mediator.on('http.request', function(request, response, emitter) {
    // identify the requested URL
    if (request.url === '/') {
        process.nextTick(function() {
            emitter.emit('root');
        });
    }
    else {
        process.nextTick(function() {
            emitter.emit('404');
        });
    }
});
The reason this works is that the emission is now delayed until the current turn of the event loop has finished, by which point all listeners have been registered.
However, there are several issues with this approach:
- one of the advantages of such an event-based architecture is that emitting modules do not need to know who is listening to their events. It should therefore not be necessary to decide whether an emission needs to be delayed, because one cannot know what is going to listen for the event and whether it needs the delay.
- it significantly clutters and complicates the code (compare the two examples).
- it probably worsens performance.
As a consequence, my question is: how does one avoid the need to delay event emission to the next tick of the event loop, such as in the described situation?
Update 19-01-2013
An example illustrating why this behavior is useful: allowing an HTTP request to be handled in parallel.
mediator.on('http.request', function(req, res) {
    req.onceall('json.parsed', 'validated', 'methodoverridden', 'authenticated', function() {
        // the request has now been validated, parsed as JSON, the HTTP method has been
        // overridden when requested to, and the request has been authenticated
    });
});
If each event like json.parsed were emitted with the original request on a global emitter, the above would not be possible, because each event might relate to a different request, and you cannot listen for a combination of actions executed in parallel for one specific request.
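For illustration, a combinator like the onceall used above could be implemented along these lines (a sketch; the question does not show the real implementation):
function onceAll(emitter, events, callback) {
    let remaining = events.length;
    events.forEach(function (name) {
        emitter.once(name, function () {
            remaining -= 1;
            if (remaining === 0) callback();
        });
    });
}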
Having both a mediator that listens for events and an emitter that also listens for and triggers events seems overly complicated. I'm sure there is a legitimate reason, but my suggestion is to simplify. We use a global eventBus in our Node.js service that does something similar. For this situation, I would emit a new event.
bus.on('http:request', function(req, res) {
    if (req.url === '/')
        bus.emit('ns:root', req, res);
    else
        bus.emit('404');
});

// note the use of a namespace here to target a specific subsystem
bus.once('ns:root', function(req, res) {
    res.send('Welcome to the frontpage.');
});
It sounds like you're starting to run into some of the disadvantages of the observer pattern (as mentioned in many books/articles that describe this pattern). My solution is not ideal (assuming an ideal one exists), but:
If you can make the simplifying assumption that the event is emitted only once per emitter (i.e. emitter.emit('root'); is called only once for any emitter instance), then perhaps you can write something that works like jQuery's $.ready() event.
In that case, subscribing with emitter.once('root', function() { ... }) will check whether 'root' was already emitted, and if so, will invoke the handler anyway. If 'root' was not emitted yet, it defers to the normal, existing functionality.
That's all I got.
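A minimal sketch of that $.ready()-style idea, assuming each event fires at most once per emitter (the class name and details are my own):
const { EventEmitter } = require('events');

class StickyEmitter extends EventEmitter {
    constructor() {
        super();
        this.fired = new Map(); // event name -> arguments it fired with
    }
    emit(name, ...args) {
        this.fired.set(name, args);
        return super.emit(name, ...args);
    }
    once(name, listener) {
        if (this.fired.has(name)) {
            // late subscriber: the event already fired, so invoke the
            // listener asynchronously with the recorded arguments
            const args = this.fired.get(name);
            process.nextTick(() => listener(...args));
            return this;
        }
        return super.once(name, listener);
    }
}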
I think this architecture is in trouble: you're doing sequential work (I/O) that requires a definite order of actions, yet you plan to build the app on components that naturally allow a non-deterministic order of execution.
What you can do:
Include a context selector in the mediator.on function, e.g. this way:
mediator.on('http.request > root', function( ... ) { })
Or define it as a submediator:
var submediator = mediator.yield('http.request > root');
submediator.on(function( ... ) {
    emitter.once('root', ... )
});
This would trigger the callback only if root was emitted from the http.request handler.
Another, trickier way is to do the ordering in the background, but it's not feasible with your current "one mediator rules them all" interface. Implement the code so that each .emit call does not actually send the event but puts the produced event in a list, and each .once puts a consume-event record in the same list. When all mediator.on callbacks have been executed, walk through the list and sort it by dependency order (e.g. if the list has consume 'root' before produce 'root', swap them), then execute the consume handlers in order. If you run out of events, stop executing.
Oi, this seems like a very broken architecture, for a few reasons:
How do you pass around request and response? It looks like you've got global references to them.
If I answer your question, you will turn your server into a purely synchronous function and lose the power of async Node.js. (Requests would effectively be queued, and could only start executing once the previous request is 100% finished.)
To fix this:
Pass request and response to the emit() call as parameters. Then you no longer need to force everything to run synchronously, because when the next component handles the event, it will have a reference to the right request and response objects.
Learn about other common solutions that don't need a global mediator. Look at the pattern Connect was based on many Internet-years ago: http://howtonode.org/connect-it (describes middleware/onion routing); a tiny sketch follows.
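A minimal sketch of the middleware idea that link describes (names are illustrative):
// each middleware gets (req, res, next) and decides whether to
// continue the chain; this is the "onion" layering Connect popularized
function run(middlewares, req, res) {
    let i = 0;
    function next() {
        const mw = middlewares[i++];
        if (mw) mw(req, res, next);
    }
    next();
}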
