My Cucumber/Ruby/Selenium scripts run by starting up a Chrome browser (2.3) session before every test and tearing it down after. They have been working for months, but now I have a real problem.
I have 600 scenarios, but once a specific number of scenarios have run (251), it bombs out with this message:
Resource temporarily unavailable - chromedriver.exe --port=8602 (Errno::EAGAIN)
If I remove the scenario at which it fails, the failure just moves to the next scenario. I tried removing many scenarios, but it still happens.
I also tried randomizing the ports as I thought that chromedriver connecting to the same port was causing the issue.
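For illustration, a minimal sketch of that port-randomization idea, shown with Selenium's Python bindings (my actual scripts are Ruby, which exposes an analogous option; `free_port` is a hypothetical helper):

```python
import socket

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

def free_port():
    # Ask the OS for an unused TCP port by binding to port 0.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

# Start chromedriver on a fresh random port for every scenario.
service = Service(port=free_port())
driver = webdriver.Chrome(service=service)
try:
    driver.get("https://example.com")
finally:
    # Always quit, or chromedriver processes and sockets pile up
    # until the OS reports EAGAIN-style resource exhaustion.
    driver.quit()
```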
I'm stumped!
I'm referring to some docs and tutorials for the Azure DP-100 (Data Science) exam.
After creating a compute instance (STANDARD_DS11_V2), I opened the Jupyter notebook and cloned a sample repository (https://github.com/microsoftdocs/ml-basics).
After this, I'm not able to load or see the files inside the ml-basics folder in Jupyter.
Nothing happens when I click the ml-basics folder, apart from the error message below after a long wait:
Timed out attempting to connect to this compute instance.
Check to make sure the compute instance is started. If you just started it, try again in a minute or two.
I'm not able to reproduce this issue; it works fine for me. As the error message suggests, make sure your instance is up and running. If it is, try restarting it, and also make sure no firewall is blocking your connection.
I'm running a 5-node MapR Drill cluster, and everything works fine, except that sometimes (it can be multiple times a day, sometimes once in a few days, with no specific pattern), when I try to connect to one of the drillbits (via the Drill Web UI or PyDrill), the login fails with an "Invalid username/password" error, even though the username and password are correct!
Scripts that try to open connections with PyDrill also fail with the same error.
The issue resolves itself after a while, or when I restart the affected drillbit with the maprcli command.
This issue occurs only on specific drillbits, not on all of them (usually node #1; it happens on the others too, but only a few times, while on the first node it happens almost daily).
Login fails for all users: the MapR user, AD users, etc.
Has anyone encountered this? I'm trying to find the root cause and a solution. I suspect it happens when the cluster is running low on memory, so the login service (PAM) fails.
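In the meantime, a possible stopgap for the PyDrill scripts is to fail over to another drillbit when the login is rejected. A minimal sketch, assuming hypothetical hostnames and that your pydrill version accepts an auth='user:password' argument and raises TransportError on failure:

```python
from pydrill.client import PyDrill
from pydrill.exceptions import TransportError

# Hypothetical hostnames for the five drillbits.
DRILLBITS = ["node1", "node2", "node3", "node4", "node5"]

def connect_any(user, password):
    """Try each drillbit in turn until one accepts the login."""
    for host in DRILLBITS:
        try:
            drill = PyDrill(host=host, port=8047, auth=f"{user}:{password}")
            if drill.is_active():
                return drill
        except TransportError:
            continue  # this drillbit is in its bad state; try the next one
    raise RuntimeError("no drillbit accepted the connection")

drill = connect_any("mapr", "secret")
```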
Thanks!
I have just started with Node-RED and InfluxDB, and I would like to apologise if this is a very silly question.
There was a network disconnection on my server earlier. After reconnecting the server to the network, the error "Error: read ECONNRESET" shows up frequently whenever an MQTT signal is received and written into InfluxDB.
A little background on my work: I am working on an Industrial IoT project where each machine sends signals via MQTT to Node-RED, which processes them and logs them into InfluxDB. The code had been running without issue before the network disconnection, and I have seen other posts stating that restarting Node-RED would solve the problem, but I cannot afford to restart it unless I schedule a time with the factory, and until then more data will be lost.
"Error: read ECONNRESET"
This error is happening at many different InfluxDB nodes, not in a single specific place. Is there any way to resolve this without having to restart Node-RED?
Thank you
Given that it's not storing any data at the moment, I would say take the hit and restart Node-RED as soon as possible.
The other option, if you are on a recent Node-RED release, is to just restart the flows. You can do this from the bottom of the drop-down menu on the Deploy button. This leaves Node-RED running and just stops and restarts all the nodes, which is quicker than a full restart.
I assume you are using the node-red-contrib-influxdb node. It looks to be using the influx npm module under the covers. I can't see anything obvious in the docs about configuring it to reconnect after a database failure. I suggest you set up a test system and try to reproduce this by restarting the DB; if you can, open an issue against node-red-contrib-influxdb on GitHub and see if the maintainers can work out how to get it to reconnect after a failure.
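For reference, the behaviour you would be testing for is a retry-on-reset pattern, sketched here with the influxdb Python client rather than the JavaScript one the node uses (the database name and point shape are invented for the example):

```python
import time

from influxdb import InfluxDBClient
from requests.exceptions import ConnectionError as RequestsConnectionError

client = InfluxDBClient(host="localhost", port=8086, database="factory")

def write_with_retry(points, retries=5, delay=2.0):
    """Retry writes that fail with a reset/refused connection."""
    for attempt in range(retries):
        try:
            return client.write_points(points)
        except RequestsConnectionError:
            # ECONNRESET and friends surface here; back off and retry.
            time.sleep(delay * (attempt + 1))
    raise RuntimeError("InfluxDB still unreachable after retries")

write_with_retry([{"measurement": "machine_signal", "fields": {"value": 1.0}}])
```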
There was a power outage one day, and I restarted the whole system. Now the database is working fine. It worked, and I don't know why. Hope this helps.
When using the JupyterLab found within the Azure ML compute instance, every now and then I run into an issue where it says the network connection is lost.
I have confirmed that the compute instance is still running.
The notebook itself can be edited and saved, so the computer/VM is definitely running.
Of course, the internet is fully functional.
In the top right corner, next to the now-blank circle, it says "No Kernel!"
We can't repro the issue; can you give us more details? One possibility is that the kernel has bugs and hangs (possibly due to installed extensions or widgets), or that the resources on the machine are exhausted and the kernel dies. What VM type are you using? If it's a small VM, you may have run out of resources.
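If resource exhaustion is the suspect, a quick check from a notebook cell might look like this (a sketch assuming psutil is available on the instance):

```python
import psutil

# Snapshot memory and CPU; a dying kernel often correlates with
# memory pressure on smaller VM sizes.
mem = psutil.virtual_memory()
print(f"RAM used: {mem.percent}% of {mem.total / 1e9:.1f} GB")
print(f"CPU load: {psutil.cpu_percent(interval=1.0)}%")
```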
After some troubleshooting on the internet, I found that you can force a reconnect (if you wait long enough, a few minutes, it will do so on its own) by using Kernel > Restart Kernel.
Based on my own experience, this seems to be a fairly common issue, but I did spend a few minutes figuring it out. Hope this helps others who run into it.
Check your browser console for any language-pack loading errors.
Part of our team had this issue this week. The root cause for us was some language packs for pt-br not loading correctly; once the affected team members changed the page/browser language to en-us, the problem was solved.
I was dealing with the same issue. After some research around this problem, I learnt my firewall was blocking JupyterLab, Jupyter, and the terminal; allowing access to them solved the issue.
I have a GitLab setup with runners on a dedicated VM (24 GB RAM, 12 vCPUs, and a very low runner concurrency of 6).
Everything worked fine until I added more browser tests: 11 at the moment.
These tests are in stage browser-test and start properly.
My problem is that it sometimes succeeds and sometimes doesn't, with totally random errors.
Sometimes it cannot resolve the host; other times it is unable to find an element on the page.
If I rerun the failed tests, everything always goes green.
Does anyone have an idea of what is going wrong here?
BTW, I've checked: this dedicated VM is not overloaded.
I have resolved all my initial issues (not tested under full machine load so far); however, I've decided to post some of my experiences.
First of all, I was experimenting with gitlab-runner concurrency (to speed things up), and it turned out that it filled my storage space really quickly. So for anybody experiencing storage shortcomings, I suggest installing this package.
Secondly, I was using the runner cache and artifacts, which in the end were cluttering my tests a bit, and I believe that was the root cause of my problems.
My observations:
If you want to take advantage of the cache in gitlab-runner, remember that by default it is accessible only on the host where the runner starts, and that the cache is extracted on top of your checkout, meaning it overrides files from your project.
Artifacts are a little more flexible, because they are stored in and fetched from your GitLab installation. You should develop your own naming convention (using variables) for them, to control what is fetched/cached between stages and to make sure everything works as you expect.
Cache and artifacts in your tests should be used with caution and understanding, because they can introduce tons of problems if not used properly.
Side note:
Although my VM was not overloaded, certain storage lags were causing timeouts in the network and finally in Dusk when running multiple gitlab-runners concurrently.
Update as of 2019-02:
Finally, I have tested this under full load, and I can confirm that my earlier side note about machine overload is more than true.
After tweaking Linux parameters to handle a big load (max open files, connections, sockets, timeouts, etc.) on the hosts running gitlab-runners, all concurrent tests pass green, without any strange occasional errors.
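That kind of tuning normally lives in sysctl and /etc/security/limits.conf; purely to illustrate the open-files knob, here is the per-process equivalent in Python (65536 is an arbitrary example value):

```python
import resource

# Per-process view of the open-files limit (the `ulimit -n` value).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft}, hard={hard}")

# Raise the soft limit toward the hard limit for this process only;
# the system-wide knobs live in limits.conf and sysctl (fs.file-max).
resource.setrlimit(resource.RLIMIT_NOFILE, (min(65536, hard), hard))
```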
Hope this helps anybody configuring gitlab-runners.