"Address in use" error while running nodejs in jupyter - jupyter-lab

When I'm using ijavascript in Jupyter to run a Firebase query, an "address in use" error pops up.
I tried restarting my machine and shutting down every other process that might be using port 8888.
... \npm\node_modules\ijavascript\node_modules\zeromq\lib\index.js:451
this._zmq.bindSync(addr);
^
Error: Address in use
at exports.Socket.Socket.bindSync
...
kernel 6d2aa3bc-854c-42e9-9edf-816b3a1dc878 restarted
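The usual first step here is to find out which process is actually holding the port the bind failed on. A minimal diagnostic sketch, assuming a Unix-like shell; note that the kernel's ZeroMQ sockets bind to the ports in the kernel connection file, so the conflicting port is not necessarily 8888:
lsof -i :8888 -sTCP:LISTEN      # who is listening on the port?
kill <PID>                      # stop the stale notebook/kernel process
jupyter lab --port 8889         # or simply start JupyterLab on another port
On Windows (which the \npm\node_modules path above suggests), netstat -ano | findstr :8888 gives the owning PID instead.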

Related

Oracle SQL Developer Status Error Failure -Test failed: IO Error: The Network Adapter could not establish the connection on Linux (Ubuntu)

I recently switched to Ubuntu, and when I installed Oracle SQL Developer 22.2.1 and tried to create a connection, I got an error:
Status : Failure -Test failed: IO Error: The Network Adapter could not establish the connection
I've tried changing the ports and switching between JDK 19, 17 and 11; none of them fixed the error, and all the guides I can find on this error seem to be specifically for Windows.
Any help is appreciated, and apologies if I haven't explained my problem clearly.
Testing the server connection should succeed, but it keeps giving Status : Failure -Test failed: IO Error: The Network Adapter could not establish the connection
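One way to separate a driver/JDK problem from a plain network problem is to test raw TCP reachability to the listener. A rough sketch, with dbhost as a placeholder and assuming the default Oracle listener port 1521 (use whatever host and port are in your connection dialog):
nc -vz dbhost 1521     # does anything answer on the listener port at all?
# On the database machine itself, check that the listener is actually up:
lsnrctl status
If the TCP connection itself fails, the problem is the listener, a firewall, or the hostname, not SQL Developer or the JDK.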

Just installed MongoDB, can't connect to server on Ubuntu 22.04

I just completed an installation of MongoDB on my computer and, after starting and enabling it, I keep getting the same error:
MongoDB shell version v4.2.23
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2022-11-10T16:25:32.787+0100 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect#src/mongo/shell/mongo.js:353:17
#(connect):2:6
2022-11-10T16:25:32.788+0100 F - [main] exception: connect failed
2022-11-10T16:25:32.788+0100 E - [main] exiting with code 1
I uninstalled and reinstalled multiple times and tried solutions that worked for other people in similar posts, but I cannot make it work.
I have a feeling it has something to do with the port it is looking for (27017), but I do not know how to check whether that is the problem and, if so, how to fix it.
Thanks!
Here are a few things to consider (example commands are sketched after this list):
Check whether the mongod process is running; you can verify with ps or netstat (look for the listen port) on the server.
See whether your MongoDB service runs locally: by default, MongoDB's listen address is localhost/loopback only.
Check whether a firewall is running and whether you have allowed the port through it.
Verify that you can telnet to port 27017 from the application server.
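A minimal sketch of those checks on Ubuntu, assuming the stock mongod systemd unit and the default config at /etc/mongod.conf:
sudo systemctl status mongod                  # is the service running at all?
sudo ss -ltnp | grep 27017                    # is anything listening on 27017?
grep bindIp /etc/mongod.conf                  # default is 127.0.0.1 (localhost only)
sudo tail -n 50 /var/log/mongodb/mongod.log   # if mongod exited, the log says why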

Virtuoso connection failed

I have Virtuoso 6.1 on Ubuntu. I was loading a DBpedia RDF dump, but the server where Virtuoso is set up disconnected while doing it, giving me this error:
Error 08S01: [Virtuoso Driver]CL065: Lost connection to server at line
2 of Top-Level: rdf_loader_run()
Since then I only get this error:
Error S2801: [Virtuoso Driver]CL033: Connect failed to localhost:1111
= localhost:1111.
I tried killing Virtuoso and restarting it, but nothing changes. I also tried changing the port, but it still tries to connect to port 1111, and when I specify the new port it gives me the same message. Does anyone know how to solve this issue?
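Two things worth checking after a crash like this are the ServerPort actually configured in virtuoso.ini and leftovers from the dead instance. A rough sketch with placeholder paths, assuming a default single-instance layout (the lock-file name follows the default virtuoso.db naming):
grep -i ServerPort /path/to/virtuoso.ini    # which SQL port is really configured?
ps -ef | grep virtuoso                      # is an old server process still alive?
ls /path/to/database/virtuoso.lck           # a stale lock file can block a clean restart
isql localhost:1111 dba                     # after restarting, test the SQL port directly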

Did spark job server start properly, and what causes the out-of-memory response?

I am using spark-jobserver-0.6.2-spark-1.6.1
(1) export OBSERVER_CONFIG = /custom-spark-jobserver-config.yml
(2)./server_start.sh
Executing the above start script returns without error. However, it creates a pid file: spark-jobserver.pid
When I cat spark-jobserver.pid, it shows pid=126633.
However, when I ran
lsof -i :9999 | grep LISTEN
It shows
java 126634 spark 17u IPv4 189013362 0t0 TCP *:distinct (LISTEN)
I deployed my Scala application to the job server as below, and it returned OK:
curl --data-binary @analytics_2.10-1.0.jar myhost:8090/jars/myservice
OK
When I ran the following curl command to test the REST service deployed on the job server:
curl -d "{data.label.index:15, data.label.field:ROOT_CAUSE, input.string:[\"tt: Getting operation. \"]}" 'myhost:8090/jobs?appName=myservice&classPath=com.test.Test&sync=true&timeout=400'
I got the following out-of-memory response:
{
"status": "ERROR",
"result": {
"errorClass": "java.lang.RuntimeException",
"cause": "unable to create new native thread",
"stack": ["java.lang.Thread.start0(Native Method)", "java.lang.Thread.start(Thread.java:714)", "org.spark-project.jetty.util.thread.QueuedThreadP ool.startThread(QueuedThreadPool.java:441)", "org.spark-project.jetty.util.thread.QueuedThreadPool.doStart(QueuedThreadPool.java:108)", "org.spark-pr oject.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)", "org.spark-project.jetty.util.component.AggregateLifeCycle.doStart(Ag gregateLifeCycle.java:81)", "org.spark-project.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:58)", "org.spark-project.jetty.serve r.handler.HandlerWrapper.doStart(HandlerWrapper.java:96)", "org.spark-project.jetty.server.Server.doStart(Server.java:282)", "org.spark-project.jetty .util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)", "org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(Jetty Utils.scala:252)", "org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)", "org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUti ls.scala:262)", "org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988)", "scala.collection.immutable.Range.foreac h$mVc$sp(Range.scala:141)", "org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979)", "org.apache.spark.ui.JettyUtils$.startJettyServer(Je ttyUtils.scala:262)", "org.apache.spark.ui.WebUI.bind(WebUI.scala:137)", "org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)", " org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)", "scala.Option.foreach(Option.scala:236)", "org.apache.spark.SparkContext.(SparkContext.scala:481)", "spark.jobserver.context.DefaultSparkContextFactory$$anon$1.(SparkContextFactory.scala:53)", "spark.jobserver.co ntext.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:53)", "spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkCon textFactory.scala:48)", "spark.jobserver.context.SparkContextFactory$class.makeContext(SparkContextFactory.scala:37)", "spark.jobserver.context.Defau ltSparkContextFactory.makeContext(SparkContextFactory.scala:48)", "spark.jobserver.JobManagerActor.createContextFromConfig(JobManagerActor.scala:378) ", "spark.jobserver.JobManagerActor$$anonfun$wrappedReceive$1.applyOrElse(JobManagerActor.scala:122)", "scala.runtime.AbstractPartialFunction$mcVL$sp .apply$mcVL$sp(AbstractPartialFunction.scala:33)", "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)", "scala.ru ntime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)", "ooyala.common.akka.ActorStack$$anonfun$receive$1.applyOrElse(ActorSt ack.scala:33)", "scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)", "scala.runtime.AbstractPartialFuncti on$mcVL$sp.apply(AbstractPartialFunction.scala:33)", "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)", "ooyala .common.akka.Slf4jLogging$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp(Slf4jLogging.scala:26)", "ooyala.common.akka.Slf4jLogging$class.ooya la$common$akka$Slf4jLogging$$withAkkaSourceLogging(Slf4jLogging.scala:35)", "ooyala.common.akka.Slf4jLogging$$anonfun$receive$1.applyOrElse(Slf4jLogg ing.scala:25)", "scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)", "scala.runtime.AbstractPartialFuncti on$mcVL$sp.apply(AbstractPartialFunction.scala:33)", 
"scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)", "ooyala .common.akka.ActorMetrics$$anonfun$receive$1.applyOrElse(ActorMetrics.scala:24)", "akka.actor.Actor$class.aroundReceive(Actor.scala:467)", "ooyala.co mmon.akka.InstrumentedActor.aroundReceive(InstrumentedActor.scala:8)", "akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)", "akka.actor.ActorC ell.invoke(ActorCell.scala:487)", "akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)", "akka.dispatch.Mailbox.run(Mailbox.scala:220)", "akka.di spatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)", "scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask .java:260)", "scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)", "scala.concurrent.forkjoin.ForkJoinPool.runWorker(Fo rkJoinPool.java:1979)", "scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)"],
"causingClass": "java.lang.OutOfMemoryError",
"message": "java.lang.OutOfMemoryError: unable to create new native thread"
My questions:
(1) Why is the process ID different from the one shown in the pid file? 126633 vs 126634?
(2) Why is spark-jobserver.pid created? Does this mean the spark job server did not start properly?
(3) How do I start the job server properly?
(4) What causes the out-of-memory response, and how do I resolve it? Is it because I did not set the heap size or memory correctly?
The jobserver binds to 8090, not 9999; maybe you should look for that process ID.
The spark-jobserver.pid file is created for tracking purposes; it does not mean the job server failed to start.
You are starting spark-jobserver properly.
Maybe try increasing the value of JOBSERVER_MEMORY; the default is 1G. Did you check in the Spark UI whether the application started properly?
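A short sketch of those checks, assuming server_start.sh picks JOBSERVER_MEMORY up from the environment (the 4g value is illustrative):
cat spark-jobserver.pid       # the launcher's pid; the JVM that binds the port is its child
lsof -i :8090 | grep LISTEN   # the jobserver listens on 8090, not 9999
ps -ef | grep 126633          # confirm the parent/child relationship between the two pids
JOBSERVER_MEMORY=4g ./server_start.sh   # restart with a larger heap (default 1G)
Note also that "unable to create new native thread" usually points at an OS-level thread/process limit rather than heap exhaustion, so checking ulimit -u for the spark user is worth a look too.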

Address already in use error when Apache Felix starts

I deleted the felix-cache directory. When I started the Felix framework again, I got this error:
ERROR: transport error 202: bind failed: Address already in use
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [debugInit.c:750]
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
Any idea how I can fix this?
Another process is still running and using that port. Check for remaining processes with ps -ef | grep java and kill the stale one.
You seem to be launching the JVM in remote debugging mode, but there is another JVM running that is also in remote debug mode using the same port number. You cannot share the port number between multiple processes. If you need to debug two Java programs simultaneously then you will have to configure them to use different ports.
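For reference, the debug port lives in the JDWP agent arguments, so giving each JVM its own address removes the clash. A sketch with illustrative jar names and port numbers:
# first JVM debugs on 5005...
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar app1.jar
# ...second JVM on a different port
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5006 -jar app2.jar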
