Google Chrome extension restarting itself - google-chrome-extension

I'm building a Chrome extension for a client. Everything has always run smoothly on my end, but when they run it on their computer, there have always been little glitches that were never repeatable (so annoying). Yesterday, I finally nailed down (hopefully) the last piece of the puzzle: on their machine, the extension is restarting itself mid-browsing-session. I know I never explicitly coded this in myself, and I have never experienced it on my own machine. To clarify, the type of restart that's happening (as there are at least three different types of restarts) seems to be the same as refreshing the background page: storage is kept intact and onInstalled handlers are not run. Anyone have any idea what could be causing such an issue?
The code is too big to post here, but I can post some snippets of it if anyone wants to see where a specific API is used or something.

It's been a while, but I'm pretty sure the problem was what I speculated about in the comments: I needed to set persistent to true in my manifest.json. Having it set to false means the background script "goes to sleep" when idle. I believe it going to sleep and waking back up was the restart I was experiencing.
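For reference, here is a minimal Manifest V2 sketch of the fix (the name, version, and script filename are placeholders):

    {
      "manifest_version": 2,
      "name": "my-extension",
      "version": "1.0",
      "background": {
        "scripts": ["background.js"],
        "persistent": true
      }
    }

With "persistent": false, Chrome treats the background script as an event page and unloads it after a period of inactivity; when it loads again, storage is intact and onInstalled does not fire, which matches the refresh-style restart described above.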

Related

Determining Website Crash Time on Linux Server

2.5 months ago, I was running a website on a Linux server to do a user study on 3 variations of a tool. All 3 variations ran on the same website. While I was conducting my user study, the website (i.e., the process hosting the website) crashed. In my sleep-deprived state, I unfortunately did not record when the crash happened. However, I now need to know a) when the crash happened, and b) for how long the website was down until I brought it back up. I only have a rough timeframe for when the crash happened and for how long it was down, but I need to pinpoint this information as precisely as possible to do some time-on-task analyses with my user study data.
The server runs Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-165-generic x86_64) and has been minimally set up to run our website. As such, it is unlikely that any utilities aside from those that came with the OS have been installed. Similarly, no additional setup has likely been done. For example, I tried looking at a history of commands used in hopes that HISTTIMEFORMAT had previously been set so that I could see timestamps. This ended up not being the case; while I can now see timestamps for commands, setting HISTTIMEFORMAT is not retroactive, meaning I can't get accurate timestamps for the commands I ran 2.5 months ago. That all being said, if you have an idea that you think might work, I'm willing to try it (as long as it doesn't break our server)!
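For the record, this is roughly what I tried with shell history (a sketch; the format string is just one common choice):

    # Show timestamps alongside bash history from now on.
    # Entries recorded before the variable was set are not stamped with
    # the time they actually ran, so this cannot recover old timestamps.
    export HISTTIMEFORMAT='%F %T '
    history | tail -5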
It is also worth mentioning that I currently do not know if it's possible to get a remote desktop or something of the like; I've just been SSHing in and using the terminal to interact with the server.
I've been bouncing ideas off friends and colleagues, and we all feel that there must be SOMETHING we could use to pinpoint when the server went down (e.g., network activity logs showing spikes around the time that the user study began as well as when the website was revived, a log of previous/no-longer-running processes, etc.). Unfortunately, none of us know enough about Linux logs or commands to really dig deep into this very specific issue.
In summary:
I need a timestamp for either when the website crashed or when it was revived. It would be nice to have both (or to otherwise determine how long the website was down), but this is not completely necessary.
I'm guessing only a "native" Linux command will be useful, since nothing new/special has been installed on our server; any additional command/tool/utility would have to work retroactively.
It may or may not be possible to get a remote desktop working with the server (e.g., to use some tool with a GUI to help dig up information).
My colleagues and I have that sense of "there must be SOMETHING we could use" among the various logs and system information, such as network activity, process start times, etc., but none of us know enough about Linux to do deep digging without some help.
Any ideas for what I can try to figure out at least when the website crashed (if not also how long it was down)?
A friend of mine pointed me to the journalctl command, which reads the systemd journal (timestamped system and service logs, kept entirely separately from shell history and HISTTIMEFORMAT); for me, its logs went as far back as October 7. It contained enough information for me to determine both when I revived my Node.js server and when it initially went down.
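In case it helps anyone else, these are the kinds of queries that got me there (a sketch; mysite.service is a hypothetical unit name and the dates are examples):

    # See which boots the journal knows about (useful if the whole machine restarted)
    journalctl --list-boots

    # Narrow the journal to the rough timeframe of the crash
    journalctl --since "2019-10-07" --until "2019-10-10"

    # If the site ran as a systemd service, filter to just that unit
    journalctl -u mysite.service --since "2019-10-07"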

"No Kernel!" error Azure ML compute JupyterLab

When using the JupyterLab found within the Azure ML compute instance, every now and then I run into an issue where it says that the network connection is lost.
I have confirmed that the compute instance is still running:
The notebook itself can be edited and saved, so the computer/VM is definitely running.
Of course, the internet is fully functional.
In the top-right corner, next to the now-blank circle, it says "No Kernel!"
We can't repro the issue; can you give us more details? One possibility is that the kernel has bugs and hangs (could be due to installed extensions or widgets), or the resources on the machine are exhausted and the kernel dies. What VM type are you using? If it's a small VM, you may have run out of resources.
After some troubleshooting, I found that you can force a reconnect by using Kernel > Restart Kernel (if you wait long enough, a few minutes or so, it reconnects on its own).
Based on my own experience, this seems to be a fairly common issue, but I did spend a few minutes figuring it out. Hope this helps others who run into it.
Check your browser console for any language-pack loading errors.
Part of our team had this issue this week. The root cause for us was some language packs for pt-br not loading correctly; once the affected team members changed the page/browser language to en-us, the problem was solved.
I have been dealing with the same issue. After some research around this problem, I learned my firewall was blocking JupyterLab, Jupyter, and the terminal; allowing access to them solved the issue.

Screen closes and exits during long node.js server process

I don't have sudo access, so currently I can't install 'Forever' (https://www.npmjs.com/package/forever).
Instead I am simply using 'Screen'.
I am running a Node.js server. At a random point, the Node server stops and screen exits. I cannot seem to collect any error data on this; I am completely unaware of why it's happening and cannot think of a way to catch it. It doesn't happen often (maybe once per day). When I load PuTTY back up and log in to my Apache server through the terminal, I type screen -x or screen -r and it tells me there are no screens attached. The Node server process definitely stops, because the app it runs stops working.
Obviously I can't post all the code here; there is tons of it. But everything appears to work wonderfully, except that every now and then something goes wrong and it closes the attached screen.
If there were a problem with the Node server, I would expect a crash, and the attached screen would stay attached, with an error output to the terminal for me to see when I open it. But in this case, it totally closes the attached screen.
Does anybody know what kind of error can cause this?
On a side note, is there an alternative to 'Forever' that can be installed without sudo access?
My Node version wasn't correct, which is why Forever wasn't installing. I didn't need sudo after all. I am now using Forever, and hopefully this will shed light on what is going on, as I have an out.log file which should catch whatever the problem is. :-)
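For anyone in a similar spot, this is roughly the setup (a sketch; server.js and the log file names are placeholders):

    # Run the server under Forever, capturing stdout and stderr to files
    forever start -l forever.log -o out.log -e err.log server.js

    # A no-install fallback: detach with nohup and redirect all output,
    # so a crash at least leaves a trace behind
    nohup node server.js > out.log 2>&1 &

Either way, the point is the same: make sure the process's output outlives the terminal session, so the next crash leaves evidence.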

gitlab runner errors occasionally

I have GitLab set up with runners on a dedicated VM (24 GB RAM, 12 vCPUs, and a very low runner concurrency of 6).
Everything worked fine until I added more browser tests (11 at the moment).
These tests are in the browser-test stage and start properly.
My problem is that the stage sometimes succeeds and sometimes doesn't, with totally random errors.
Sometimes it cannot resolve a host; other times it is unable to find an element on the page.
If I rerun these failed tests, everything always goes green.
Anyone have an idea of what is going wrong here?
BTW, I've checked: this dedicated VM is not overloaded...
I have resolved all my initial issues (not tested with full machine load so far); however, I've decided to post some of my experiences.
First of all, I was experimenting with gitlab-runner concurrency (to speed things up), and it turned out that this really quickly filled my storage space. So for anybody experiencing storage shortcomings, I suggest installing this package.
Secondly, I was using runner cache and artifacts, which in the end were cluttering my tests a bit, and I believe that was the root cause of my problems.
My observations:
If you want to take advantage of cache in gitlab-runner, remember that by default it is accessible only on the host where the runner starts, and remember that the cache is extracted on top of your checkout, meaning it overrides files from your project.
Artifacts are a little more flexible, because they are stored on / fetched from your GitLab installation. You should develop your own naming convention (using variables) for them to control what is fetched/cached between stages and to make sure everything works as you would expect (see the sketch after this list).
Cache/artifacts in your tests should be used with caution and understanding, because they can introduce tons of problems if not used properly...
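A minimal .gitlab-ci.yml sketch of the kind of naming convention I mean (the job name, paths, and expiry are hypothetical; the $CI_* variables are GitLab's predefined ones):

    browser-test:
      stage: browser-test
      cache:
        key: "$CI_COMMIT_REF_SLUG"            # one cache per branch
        paths:
          - node_modules/
          - vendor/
      artifacts:
        name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
        paths:
          - storage/logs/                     # keep test logs for debugging
        expire_in: 1 week

Keying the cache on the branch slug and naming artifacts after the job makes it obvious which run produced which files, which is exactly the kind of clutter that was tripping up my tests.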
Side note:
Although my VM was not overloaded, certain lags in storage were causing timeouts in the network and, ultimately, in Dusk, when running multiple gitlab-runners concurrently...
Update as of 2019-02:
Finally, I have tested this under full load, and I can confirm my earlier side note about machine overload: it is more than true.
After tweaking Linux parameters to handle a big load (max open files, connections, sockets, timeouts, etc.) on the hosts running gitlab-runners, all concurrent tests pass green, without any strange occasional errors.
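For a concrete idea of what "tweaking Linux parameters" can look like, here is a sketch (the values are illustrative examples, not recommendations; tune them for your own workload):

    # Raise the per-session open-file limit (persist via /etc/security/limits.conf)
    ulimit -n 65535

    # Raise system-wide and network-level limits via sysctl
    sudo sysctl -w fs.file-max=2097152          # system-wide open-file cap
    sudo sysctl -w net.core.somaxconn=4096      # larger pending-connection backlog
    sudo sysctl -w net.ipv4.tcp_fin_timeout=15  # recycle closed sockets faster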
Hope it helps anybody with configuring gitlab-runners...

NodeJS Express App still works despite forgetting app.listen()

I had a strange problem with my routes in my NodeJS project and it turns out that the reason was that I forgot to add app.listen() at the end.
Everything else was written and working normally though. That is to say, my pages were being served and all the code seemed fine.
But I'm puzzled. Why did my code work before I had app.listen() in there?
I don't intend to post the whole project; I'm just looking for helpful suggestions and scenarios people can think of regarding how this could possibly happen.
I really can't give a definitive answer without the code, but it might have to do with your development environment. You might have included app.listen() in a previous iteration and executed the program. In some operating systems, closing the terminal won't kill the current process but will keep it running. So your server could have been running the whole time, and you were getting routing problems because whenever you ran your new code, it wasn't listening on the proper port.
Though, this might not apply to your situation.
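A minimal sketch of the scenario (the port and route are placeholders). Run as-is, this script sets up its routes and then exits immediately, because nothing keeps the event loop alive; if requests to the port still get answered, an older process is almost certainly still listening:

    // app.js: an Express app with routes but no listener
    const express = require('express');
    const app = express();

    app.get('/', (req, res) => res.send('hello'));

    // Without the line below, nothing binds the port and the process exits:
    // app.listen(3000, () => console.log('listening on 3000'));

To check for a stale process holding the port, something like lsof -i :3000 or ps aux | grep node will show what is actually listening.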
