This question already has answers here:
Why are Futures within Futures running sequentially when started on Akka Dispatcher
(3 answers)
Closed 3 years ago.
I'm trying to test the ExecutionContext behaviour in a Play app, and I found that I'm not able to achieve any degree of parallelism when using the default dispatcher, whether by calling as.dispatcher, by calling as.dispatchers.lookup("akka.actor.default-dispatcher"), or by passing the default execution context as a parameter to my controller class:
class HomeController @Inject()(cc: ControllerComponents)(implicit ec: ExecutionContext)
I'm building on the Play examples available here, adding/altering the following configuration:
routes
GET /futures controllers.HomeController.testFutures(dispatcherId: String)
common.conf
akka {
  my-dispatcher {
    executor = "fork-join-executor"
    fork-join-executor {
      # vm-cores = 4
      parallelism-min = 4
      parallelism-factor = 2.0
      # 2x vm-cores
      parallelism-max = 8
    }
  }
  actor.default-dispatcher {
    executor = "fork-join-executor"
    fork-join-executor {
      # vm-cores = 4
      parallelism-min = 4
      parallelism-factor = 2.0
      # 2x vm-cores
      parallelism-max = 8
    }
  }
}
HomeController
@Singleton
class HomeController @Inject()(cc: ControllerComponents, as: ActorSystem) extends AbstractController(cc) {
  import HomeController._

  def testFutures(dispatcherId: String) = Action.async { implicit request =>
    implicit val dispatcher = as.dispatchers.lookup(dispatcherId)
    Future.sequence((0 to 10).map(i => Future {
      val time = 1000 + Random.nextInt(200)
      log.info(s"Sleeping #$i for $time ms")
      Thread.sleep(time)
      log.info(s"Awakening #$i")
    })).map(_ => Ok("ok"))
  }
}
For some reason, calls to http://localhost:9000/futures?dispatcherId=akka.actor.default-dispatcher (default dispatcher) don't parallelize and produce the following output:
[info] c.HomeController - Sleeping #0 for 1044 ms
[info] c.HomeController - Awakening #0
[info] c.HomeController - Sleeping #1 for 1034 ms
[info] c.HomeController - Awakening #1
[info] c.HomeController - Sleeping #2 for 1031 ms
[info] c.HomeController - Awakening #2
[info] c.HomeController - Sleeping #3 for 1065 ms
[info] c.HomeController - Awakening #3
[info] c.HomeController - Sleeping #4 for 1082 ms
[info] c.HomeController - Awakening #4
[info] c.HomeController - Sleeping #5 for 1057 ms
[info] c.HomeController - Awakening #5
[info] c.HomeController - Sleeping #6 for 1090 ms
[info] c.HomeController - Awakening #6
[info] c.HomeController - Sleeping #7 for 1165 ms
[info] c.HomeController - Awakening #7
[info] c.HomeController - Sleeping #8 for 1173 ms
[info] c.HomeController - Awakening #8
[info] c.HomeController - Sleeping #9 for 1034 ms
[info] c.HomeController - Awakening #9
[info] c.HomeController - Sleeping #10 for 1056 ms
[info] c.HomeController - Awakening #10
But calls to http://localhost:9000/futures?dispatcherId=akka.my-dispatcher (using another dispatcher) parallelize correctly and produce the following output.
[info] c.HomeController - Sleeping #1 for 1191 ms
[info] c.HomeController - Sleeping #0 for 1055 ms
[info] c.HomeController - Sleeping #7 for 1196 ms
[info] c.HomeController - Sleeping #4 for 1121 ms
[info] c.HomeController - Sleeping #6 for 1040 ms
[info] c.HomeController - Sleeping #2 for 1016 ms
[info] c.HomeController - Sleeping #5 for 1107 ms
[info] c.HomeController - Sleeping #3 for 1165 ms
[info] c.HomeController - Awakening #2
[info] c.HomeController - Sleeping #8 for 1002 ms
[info] c.HomeController - Awakening #6
[info] c.HomeController - Sleeping #9 for 1127 ms
[info] c.HomeController - Awakening #0
[info] c.HomeController - Sleeping #10 for 1016 ms
[info] c.HomeController - Awakening #5
[info] c.HomeController - Awakening #4
[info] c.HomeController - Awakening #3
[info] c.HomeController - Awakening #1
[info] c.HomeController - Awakening #7
[info] c.HomeController - Awakening #8
[info] c.HomeController - Awakening #10
[info] c.HomeController - Awakening #9
Any ideas why this could be happening?
I think the behavior is caused by akka.actor.default-dispatcher being a BatchingExecutor, which tries to optimize operations such as map/flatMap by executing them on the same thread to avoid unnecessary scheduling. When we are going to block, we can say so with the hint scala.concurrent.blocking(Thread.sleep(time)); this stores a marker in a ThreadLocal[BlockContext] indicating the intention to block, so the executor skips the batching optimization and runs the operation on another thread.
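As a toy illustration of that mechanism (a standalone sketch, not part of the Play app; the object and value names here are made up for the example), a custom BlockContext can observe the hint that blocking emits:

import scala.concurrent.{blocking, BlockContext, CanAwait}

object BlockingHintDemo extends App {
  // A toy BlockContext: the runtime consults it (through a ThreadLocal)
  // whenever code inside its scope calls scala.concurrent.blocking { ... }.
  val observingContext = new BlockContext {
    override def blockOn[T](thunk: => T)(implicit permission: CanAwait): T = {
      println("blocking { ... } signalled the intention to block")
      thunk // run the blocking work; a smarter executor could spawn a thread here
    }
  }

  BlockContext.withBlockContext(observingContext) {
    blocking(Thread.sleep(10)) // triggers blockOn on our context
  }
}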
If you change the line Thread.sleep(time) to scala.concurrent.blocking(Thread.sleep(time)), you will get the desired behavior:
@Singleton
class HomeController @Inject()(cc: ControllerComponents, as: ActorSystem) extends AbstractController(cc) {
  import HomeController._

  def testFutures(dispatcherId: String) = Action.async { implicit request =>
    implicit val dispatcher = as.dispatchers.lookup(dispatcherId)
    Future.sequence((0 to 10).map(i => Future {
      val time = 1000 + Random.nextInt(200)
      log.info(s"Sleeping #$i for $time ms")
      scala.concurrent.blocking(Thread.sleep(time))
      log.info(s"Awakening #$i")
    })).map(_ => Ok("ok"))
  }
}
[info] play.api.Play - Application started (Dev) (no global state)
Sleeping #0 for 1062 ms
Sleeping #1 for 1128 ms
Sleeping #2 for 1189 ms
Sleeping #3 for 1105 ms
Sleeping #4 for 1169 ms
Sleeping #5 for 1178 ms
Sleeping #6 for 1057 ms
Sleeping #7 for 1003 ms
Sleeping #8 for 1164 ms
Sleeping #9 for 1029 ms
Sleeping #10 for 1005 ms
Awakening #7
Awakening #10
Awakening #9
Awakening #6
Awakening #0
Awakening #3
Awakening #1
Awakening #8
Awakening #4
Awakening #5
Awakening #2
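As a side note, if you want a dedicated pool as an injectable ExecutionContext rather than a per-request lookup, Play offers CustomExecutionContext. A minimal sketch, assuming the akka.my-dispatcher configuration shown above (the class name is mine):

import javax.inject.{Inject, Singleton}
import akka.actor.ActorSystem
import play.api.libs.concurrent.CustomExecutionContext

// Wraps the "akka.my-dispatcher" pool so controllers can simply take an
// implicit MyExecutionContext parameter instead of looking dispatchers up.
@Singleton
class MyExecutionContext @Inject()(system: ActorSystem)
  extends CustomExecutionContext(system, "akka.my-dispatcher")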
Related
So I have a project which contains multiple micro-services; the entry point is the Django micro-service (with user login & authentication).
All other APIs are wrapped with the @login_required decorator by Django.
I've deployed it using nginx and gunicorn inside a Kubernetes pod.
Everything works fine with 1 worker (sync), but I obviously want multiple workers, so when I increase my number of workers and threads (gthread), it doesn't let me log in and redirects to login/?next=/login/5/.
After spending some time, I figured out the following:
Since the default keepalive of a worker is 2s, the worker closes the connection and a new worker is assigned; since they don't share common memory, my session cookie isn't carried forward.
I tried increasing the keepalive time to 10s; now it lets me log in, but if someone tries to log in when the worker is about to expire (around 10s), the same thing happens: no login, and it redirects to login/?next=/login/5/.
Another direction I found was about fixing the secret key, as in this post, but I am just using the default standard value, which I even hardcoded in settings.py, with no luck.
I'll attach my gunicorn config/logs below:
config: ./gunicorn.conf.py
wsgi_app: None
bind: ['0.0.0.0:8000']
backlog: 2048
workers: 2
worker_class: sync
threads: 4
worker_connections: 1000
max_requests: 0
max_requests_jitter: 0
timeout: 600
graceful_timeout: 30
keepalive: 2
limit_request_line: 4094
limit_request_fields: 100
limit_request_field_size: 8190
reload: False
reload_engine: auto
reload_extra_files: []
spew: False
check_config: False
print_config: False
preload_app: False
sendfile: None
reuse_port: False
chdir: /app
daemon: False
raw_env: []
pidfile: None
worker_tmp_dir: None
user: 0
group: 0
umask: 0
initgroups: False
tmp_upload_dir: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
forwarded_allow_ips: ['127.0.0.1']
accesslog: None
disable_redirect_access_to_syslog: False
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
errorlog: -
loglevel: debug
capture_output: False
logger_class: gunicorn.glogging.Logger
logconfig: None
logconfig_dict: {}
syslog_addr: udp://localhost:514
syslog: False
syslog_prefix: None
syslog_facility: user
enable_stdio_inheritance: False
statsd_host: None
dogstatsd_tags:
statsd_prefix:
proc_name: None
default_proc_name: UI-1.wsgi:application
pythonpath: None
paste: None
on_starting:
proxy_protocol: False
proxy_allow_ips: ['127.0.0.1']
keyfile: None
certfile: None
ssl_version: 2
cert_reqs: 0
ca_certs: None
suppress_ragged_eofs: True
do_handshake_on_connect: False
ciphers: None
raw_paste_global_conf: []
strip_header_spaces: False
[2022-08-31 14:23:52 +0000] [9] [INFO] Starting gunicorn 20.1.0
[2022-08-31 14:23:52 +0000] [9] [DEBUG] Arbiter booted
[2022-08-31 14:23:52 +0000] [9] [INFO] Listening at: http://0.0.0.0:8000 (9)
[2022-08-31 14:23:52 +0000] [9] [INFO] Using worker: gthread
[2022-08-31 14:23:52 +0000] [11] [INFO] Booting worker with pid: 11
[2022-08-31 14:23:52 +0000] [12] [INFO] Booting worker with pid: 12
[2022-08-31 14:23:52 +0000] [9] [DEBUG] 2 workers
[2022-08-31 14:23:58 +0000] [11] [DEBUG] GET /
[2022-08-31 14:23:58 +0000] [12] [DEBUG] GET /
[2022-08-31 14:23:58 +0000] [11] [DEBUG] GET /
Not Found: /
Not Found: /
[2022-08-31 14:23:58 +0000] [11] [DEBUG] Closing connection.
[2022-08-31 14:23:58 +0000] [11] [DEBUG] Closing connection.
Not Found: /
[2022-08-31 14:23:58 +0000] [12] [DEBUG] Closing connection.
[2022-08-31 14:24:13 +0000] [12] [DEBUG] GET /
[2022-08-31 14:24:13 +0000] [11] [DEBUG] GET /
Not Found: /
Not Found: /
[2022-08-31 14:24:13 +0000] [11] [DEBUG] Closing connection.
[2022-08-31 14:24:13 +0000] [12] [DEBUG] Closing connection.
[2022-08-31 14:24:13 +0000] [11] [DEBUG] GET /
Not Found: /
[2022-08-31 14:24:13 +0000] [11] [DEBUG] Closing connection.
[2022-08-31 14:24:28 +0000] [12] [DEBUG] GET /
[2022-08-31 14:24:28 +0000] [12] [DEBUG] GET /
Not Found: /
Not Found: /
[2022-08-31 14:24:28 +0000] [12] [DEBUG] Closing connection.
[2022-08-31 14:24:28 +0000] [12] [DEBUG] Closing connection.
[2022-08-31 14:24:28 +0000] [11] [DEBUG] GET /
Not Found: /
Let me know what I am doing wrong; I'm unable to figure it out. Thanks in advance!
I am using Qt QML. I see some random GUI freezes. I was able to get the back trace of the hang and it looks as follows:
It looks like GUI/render thread is stuck in swap buffers, which is basically the call to the graphics driver to execute the rest of the rendering commands to post the contents to the window.
Has anybody faced this issue ? Any clue about what could be causing this?
#3  <signal handler called>
#4  0x75a88a0c in pthread_cond_wait () from /opt/btl/data/libpthread-2.23.so
#5  0x74af14ac in gcoOS_GetDisplayBackbuffer (Display=0x16dc710, Window=<optimized out>, context=context@entry=0x7ef910b8, surface=<optimized out>, Offset=0x7ef910c0, X=0x7ef910c4, Y=0x7ef910c8) at gc_hal_user_fbdev.c:1015
#6  0x74af2744 in gcoOS_GetDisplayBackbufferEx (Display=<optimized out>, Window=<optimized out>, localDisplay=<optimized out>, context=context@entry=0x7ef910b8, surface=<optimized out>, surface@entry=0x7ef910bc, Offset=0x7ef910c4, Offset@entry=0x7ef910c0, X=0x7ef910c8, X@entry=0x7ef910c4, Y=Y@entry=0x7ef910c8) at gc_hal_user_fbdev.c:2401
#7  0x74c3cb70 in veglGetDisplayBackBuffer (Display=Display@entry=0x16dd00c, Surface=Surface@entry=0x26bc6f4, BackBuffer=0x7ef910b8, BackBuffer@entry=0x7ef91138) at gc_egl_platform.c:217
#8  0x74c37bc4 in _SwapBuffersRegion (Rects=<optimized out>, NumRects=1, Draw=0x26bc6f4, Dpy=0x16dd00c, Thread=0x16dcb5c) at gc_egl_swap.c:3338
#9  _eglSwapBuffersRegion (Dpy=0x16dd00c, Dpy@entry=<error reading variable: value has been optimized out>, Draw=0x26bc6f4, Draw@entry=<error reading variable: value has been optimized out>, NumRects=1, NumRects@entry=<error reading variable: value has been optimized out>, Rects=<optimized out>, Rects@entry=<error reading variable: value has been optimized out>) at gc_egl_swap.c:4246
#10 0x7535e708 in veglSwapBuffer_es3 (Dpy=<optimized out>, Draw=<optimized out>, Callback=<optimized out>) at src/glcore/gc_es_egl.c:354
#11 0x74c38964 in eglSwapBuffers (Dpy=0x16dd00c, Draw=0x26bc6f4) at gc_egl_swap.c:4400
#12 0x72ffc9d8 in QEGLPlatformContext::swapBuffers (this=this@entry=0x26bf928, surface=surface@entry=0x1bddca8) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/platformsupport/eglconvenience/qeglplatformcontext.cpp:447
#13 0x72fba380 in QEglFSContext::swapBuffers (this=0x26bf928, surface=0x1bddca8) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/plugins/platforms/eglfs/api/qeglfscontext.cpp:115
#14 0x765da978 in QOpenGLContext::swapBuffers (this=0x26bc6a0, surface=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/gui/kernel/qopenglcontext.cpp:1111
#15 0x76a814d4 in QSGGuiThreadRenderLoop::renderWindow (this=0x1bd9a20, window=0x350020) at /usr/src/debug/qtdeclarative/5.9.5+gitAUTOINC+dfbe918537-r0/git/src/quick/scenegraph/qsgrenderloop.cpp:445
#16 0x76af071c in QQuickWindow::event (this=0x1bd95a8, e=0x7ef912f4) at /usr/src/debug/qtdeclarative/5.9.5+gitAUTOINC+dfbe918537-r0/git/src/quick/items/qquickwindow.cpp:1588
#17 0x75d2285c in doNotify (event=<optimized out>, receiver=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1099
#18 QCoreApplication::notify (this=<optimized out>, receiver=<optimized out>, event=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1085
#19 0x75d229bc in QCoreApplication::notifyInternal2 (receiver=receiver@entry=0x1bd95a8, event=event@entry=0x7ef912f4) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1024
#20 0x765aa600 in QCoreApplication::sendEvent (event=0x7ef912f4, receiver=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.h:233
#21 QWindowPrivate::deliverUpdateRequest (this=this@entry=0x1bd9618) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/gui/kernel/qwindow.cpp:2305
#22 0x765aadac in QWindow::event (this=this@entry=0x1bd95a8, ev=ev@entry=0x7ef913f0) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/gui/kernel/qwindow.cpp:2276
#23 0x76af06e0 in QQuickWindow::event (this=0x1bd95a8, e=0x7ef913f0) at /usr/src/debug/qtdeclarative/5.9.5+gitAUTOINC+dfbe918537-r0/git/src/quick/items/qquickwindow.cpp:1607
#24 0x75d2285c in doNotify (event=<optimized out>, receiver=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1099
#25 QCoreApplication::notify (this=<optimized out>, receiver=<optimized out>, event=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1085
#26 0x75d229bc in QCoreApplication::notifyInternal2 (receiver=0x1bd95a8, event=event@entry=0x7ef913f0) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1024
#27 0x75d7e208 in QCoreApplication::sendEvent (event=0x7ef913f0, receiver=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.h:233
#28 QTimerInfoList::activateTimers (this=0x16df7b4) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qtimerinfo_unix.cpp:643
#29 0x75d7ea50 in timerSourceDispatch (source=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qeventdispatcher_glib.cpp:182
#30 idleTimerSourceDispatch (source=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qeventdispatcher_glib.cpp:229
#31 0x74ddcaec in g_main_dispatch (context=0x16df708) at /usr/src/debug/glib-2.0/1_2.46.2-r0/glib-2.46.2/glib/gmain.c:3154
#32 g_main_context_dispatch (context=context@entry=0x16df708) at /usr/src/debug/glib-2.0/1_2.46.2-r0/glib-2.46.2/glib/gmain.c:3769
#33 0x74ddcd14 in g_main_context_iterate (context=context@entry=0x16df708, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at /usr/src/debug/glib-2.0/1_2.46.2-r0/glib-2.46.2/glib/gmain.c:3840
#34 0x74ddcdc0 in g_main_context_iteration (context=0x16df708, may_block=may_block@entry=1) at /usr/src/debug/glib-2.0/1_2.46.2-r0/glib-2.46.2/glib/gmain.c:3901
#35 0x75d7ecb4 in QEventDispatcherGlib::processEvents (this=0x146f2c0, flags=...) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qeventdispatcher_glib.cpp:423
#36 0x75d208e0 in QEventLoop::exec (this=this@entry=0x7ef91524, flags=flags@entry=...) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qeventloop.cpp:212
#37 0x75d29d08 in QCoreApplication::exec () at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1297
#38 0x00043c24 in ?? ()
#39 0x7576dcf8 in __libc_start_main () from /opt/btl/data/libc-2.23.s
#40 0x0034fd2c in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
I am connecting to a Google BigQuery table using the spark-bigquery-connector in the IntelliJ IDE. While reading the table and printing it, it doesn't show any records, although the metadata is pulled from BigQuery. The table emp has records in it.
val spark = SparkSession.builder.appName("my first app")
  .config("spark.master", "local")
  .getOrCreate()

val myDF = spark.read.format("bigquery")
  .option("credentialsFile", "src\\main\\resources\\gcloud-rkg-cred.json")
  .load("decoded-tribute-279515:gcp_test_db.emp")
val newDF = myDF.select("empid", "empname", "salary")

println(myDF.printSchema)
println(newDF.printSchema)
println(newDF.show)
The printSchema calls for myDF and newDF return the columns, but newDF.show only prints this: ()
My build.sbt file is as below:
name := "myTestGCPProject"
version := "0.1"
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.3"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.3"
libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.16.1"
Snapshot of schema and data of the table -
After I tried with 0.17.0 as suggested by David in the comments below, the following is the error I received:
20/07/23 11:17:43 INFO ComputeEngineCredentials: Failed to detect whether we are running on Google Compute Engine.
root
|-- empid: long (nullable = false)
|-- empname: string (nullable = false)
|-- location: string (nullable = false)
|-- salary: long (nullable = false)
root
|-- empid: long (nullable = false)
|-- empname: string (nullable = false)
|-- salary: long (nullable = false)
20/07/23 11:17:51 INFO DirectBigQueryRelation: Querying table decoded-tribute-279515.gcp_test_db.emp, parameters sent from Spark: requiredColumns=[empid,empname,salary], filters=[]
20/07/23 11:17:51 INFO DirectBigQueryRelation: Going to read from decoded-tribute-279515.gcp_test_db.emp columns=[empid, empname, salary], filter=''
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 49
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 79
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 71
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 69
20/07/23 11:17:52 INFO BlockManagerInfo: Removed broadcast_5_piece0 on raghav-VAIO:49977 in memory (size: 6.5 KB, free: 639.2 MB)
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 88
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 83
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 58
20/07/23 11:17:52 INFO BlockManagerInfo: Removed broadcast_4_piece0 on raghav-VAIO:49977 in memory (size: 20.8 KB, free: 639.2 MB)
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 14
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 62
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 87
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 76
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 8
20/07/23 11:17:52 INFO BlockManagerInfo: Removed broadcast_3_piece0 on raghav-VAIO:49977 in memory (size: 7.2 KB, free: 639.3 MB)
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 9
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 10
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 72
20/07/23 11:17:52 INFO BlockManagerInfo: Removed broadcast_1_piece0 on raghav-VAIO:49977 in memory (size: 4.5 KB, free: 639.3 MB)
20/07/23 11:17:52 INFO ContextCleaner: Cleaned accumulator 42
[error] (run-main-0) com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.rpc.UnknownException: com.google.cloud.spark.bigquery.repackaged.io.grpc.StatusRuntimeException: UNKNOWN: Channel Pipeline: [WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]
[error] com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.rpc.UnknownException: com.google.cloud.spark.bigquery.repackaged.io.grpc.StatusRuntimeException: UNKNOWN: Channel Pipeline: [WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:47)
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1083)
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1174)
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
[error] at com.google.cloud.spark.bigquery.repackaged.com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:545)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:515)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:689)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$900(ClientCallImpl.java:577)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:751)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:740)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
[error] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
[error] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
[error] Caused by: com.google.cloud.spark.bigquery.repackaged.io.grpc.StatusRuntimeException: UNKNOWN: Channel Pipeline: [WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.Status.asRuntimeException(Status.java:533)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:515)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:689)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$900(ClientCallImpl.java:577)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:751)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:740)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
[error] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
[error] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
[error] Caused by: com.google.cloud.spark.bigquery.repackaged.io.netty.channel.ChannelPipelineException: com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.ProtocolNegotiators$ClientTlsHandler.handlerAdded() has thrown an exception; removed.
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:624)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:572)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:515)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.ProtocolNegotiators$ProtocolNegotiationHandler.fireProtocolNegotiationEvent(ProtocolNegotiators.java:767)
[error] at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.ProtocolNegotiators$WaitUntilActiveHandler.channelActive(ProtocolNegotiators.java:676)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:230)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:216)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:209)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.DefaultChannelPipeline$HeadContext.channelActive(DefaultChannelPipeline.java:1398)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:230)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:216)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.DefaultChannelPipeline.fireChannelActive(DefaultChannelPipeline.java:895)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:305)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:335)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
[error] at java.lang.Thread.run(Thread.java:748)
[error] Caused by: java.lang.RuntimeException: ALPN unsupported. Is your classpath configured correctly? For Conscrypt, add the appropriate Conscrypt JAR to classpath and set the security provider. For Jetty-ALPN, see http://www.eclipse.org/jetty/documentation/current/alpn-chapter.html#alpn-starting
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.handler.ssl.JdkAlpnApplicationProtocolNegotiator$FailureWrapper.wrapSslEngine(JdkAlpnApplicationProtocolNegotiator.java:122)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.handler.ssl.JdkSslContext.configureAndWrapEngine(JdkSslContext.java:360)
[error] at com.google.cloud.spark.bigquery.repackaged.io.netty.handler.ssl.JdkSslContext.newEngine(JdkSslContext.java:335)
Please help.
Edit: Please make sure you use the spark-bigquery-with-dependencies artifact.
You don't need println() for those methods; please try:
myDF.printSchema()
newDF.printSchema()
newDF.show()
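The () you saw comes from the fact that these methods are side-effecting: printSchema() and show() print to the console themselves and return Unit, so wrapping them in println only adds Unit's string form. A small sketch of what happens:

// show() prints the rows itself and returns Unit, so
// println(newDF.show) prints the rows followed by "()" (Unit.toString).
val result: Unit = newDF.show()
println(result) // prints: ()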
When I'm starting a crawl using Nutch 1.15 with this:
/usr/local/nutch/bin/crawl --i -s urls/seed.txt crawldb 5
Then it starts to run and I get this error when it tries to fetch:
2019-02-10 15:29:32,021 INFO mapreduce.Job - Running job: job_local1267180618_0001
2019-02-10 15:29:32,145 INFO fetcher.FetchItemQueues - Using queue mode : byHost
2019-02-10 15:29:32,145 INFO fetcher.Fetcher - Fetcher: threads: 50
2019-02-10 15:29:32,145 INFO fetcher.Fetcher - Fetcher: time-out divisor: 2
2019-02-10 15:29:32,149 INFO fetcher.QueueFeeder - QueueFeeder finished: total 1 records hit by time limit : 0
2019-02-10 15:29:32,234 WARN mapred.LocalJobRunner - job_local1267180618_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NullPointerException
at org.apache.nutch.net.URLExemptionFilters.<init>(URLExemptionFilters.java:39)
at org.apache.nutch.fetcher.FetcherThread.<init>(FetcherThread.java:154)
at org.apache.nutch.fetcher.Fetcher$FetcherRun.run(Fetcher.java:222)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2019-02-10 15:29:33,023 INFO mapreduce.Job - Job job_local1267180618_0001 running in uber mode : false
2019-02-10 15:29:33,025 INFO mapreduce.Job - map 0% reduce 0%
2019-02-10 15:29:33,028 INFO mapreduce.Job - Job job_local1267180618_0001 failed with state FAILED due to: NA
2019-02-10 15:29:33,038 INFO mapreduce.Job - Counters: 0
2019-02-10 15:29:33,039 ERROR fetcher.Fetcher - Fetcher job did not succeed, job status:FAILED, reason: NA
2019-02-10 15:29:33,039 ERROR fetcher.Fetcher - Fetcher: java.lang.RuntimeException: Fetcher job did not succeed, job status:FAILED, reason: NA
at org.apache.nutch.fetcher.Fetcher.fetch(Fetcher.java:503)
at org.apache.nutch.fetcher.Fetcher.run(Fetcher.java:543)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.fetcher.Fetcher.main(Fetcher.java:517)
And I get this error in the console, showing the command it was running:
Error running:
/usr/local/nutch/bin/nutch fetch -D mapreduce.job.reduces=2 -D mapred.child.java.opts=-Xmx1000m -D mapreduce.reduce.speculative=false -D mapreduce.map.speculative=false -D mapreduce.map.output.compress=true -D fetcher.timelimit.mins=180 crawlsites/segments/20190210152929 -noParsing -threads 50
I had to delete the Nutch folder and do a fresh install, and it worked after that.
I tried to use EclairJS Server following the instructions available here: https://github.com/EclairJS/eclairjs/tree/master/server
After executing mvn package, I got the following error:
Tests run: 293, Failures: 8, Errors: 9, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 04:51 min
[INFO] Finished at: 2018-04-10T07:13:41+00:00
[INFO] Final Memory: 31M/373M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on project eclairjs-nashorn: There are test failures.
[ERROR]
[ERROR] Please refer to /root/eclairjs/server/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.528 sec - in org.eclairjs.nashorn.ZClusterTest
Running org.eclairjs.nashorn.PairRDDTest
Tests run: 8, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< FAILURE! - in org.eclairjs.nashorn.PairRDDTest
countByKey(org.eclairjs.nashorn.PairRDDTest) Time elapsed: 0.582 sec <<< FAILURE!
org.junit.ComparisonFailure: failure - strings are not equal expected:<{"[pandas":1,"coffee":3]}> but was:<{"[coffee":3,"pandas":1]}>
at org.eclairjs.nashorn.PairRDDTest.countByKey(PairRDDTest.java:64)
cogroup2(org.eclairjs.nashorn.PairRDDTest) Time elapsed: 0.73 sec <<< FAILURE!
org.junit.ComparisonFailure: failure - strings are not equal expected:<[{"0":"[Apples","1":{"0":["Fruit"],"1":[3],"2":[42],"length":3},"length":2},{"0":"Oranges","1":{"0":["Fruit","Citrus"],"1":[2],"2":[21]],"length":3},"lengt...> but was:<[{"0":"[Oranges","1":{"0":["Fruit","Citrus"],"1":[2],"2":[21],"length":3},"length":2},{"0":"Apples","1":{"0":["Fruit"],"1":[3],"2":[42]],"length":3},"lengt...>
at org.eclairjs.nashorn.PairRDDTest.cogroup2(PairRDDTest.java:112)
cogroup3(org.eclairjs.nashorn.PairRDDTest) Time elapsed: 0.405 sec <<< FAILURE!
org.junit.ComparisonFailure: failure - strings are not equal expected:<[{"0":"[Apples","1":{"0":["Fruit"],"1":[3],"2":[42],"3":["WA"],"length":4},"length":2},{"0":"Oranges","1":{"0":["Fruit","Citrus"],"1":[2],"2":[21],"3":["FL]"],"length":4},"leng...> but was:<[{"0":"[Oranges","1":{"0":["Fruit","Citrus"],"1":[2],"2":[21],"3":["FL"],"length":4},"length":2},{"0":"Apples","1":{"0":["Fruit"],"1":[3],"2":[42],"3":["WA]"],"length":4},"leng...>
at org.eclairjs.nashorn.PairRDDTest.cogroup3(PairRDDTest.java:124)
Tests run: 50, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 94.35 sec <<< FAILURE! - in org.eclairjs.nashorn.MlTest
LDAExample(org.eclairjs.nashorn.MlTest) Time elapsed: 0.005 sec <<< ERROR!
javax.script.ScriptException: TypeError: Cannot load script from examples/ml/LDA_example.js in /ml/mltest.js at line number 214
at org.eclairjs.nashorn.MlTest.LDAExample(MlTest.java:610)
Caused by: jdk.nashorn.internal.runtime.ECMAException: TypeError: Cannot load script from examples/ml/LDA_example.js
at org.eclairjs.nashorn.MlTest.LDAExample(MlTest.java:610)
Running org.eclairjs.nashorn.CoreExamplesTest
Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.064 sec <<< FAILURE! - in org.eclairjs.nashorn.CoreExamplesTest
WordCount(org.eclairjs.nashorn.CoreExamplesTest) Time elapsed: 0.017 sec <<< ERROR!
javax.script.ScriptException: TypeError: Cannot load script from eclairjs/sql/sparkSession in file:/root/eclairjs/server/target/classes/eclairjs/jvm-npm/jvm-npm.js at line number 122
at org.eclairjs.nashorn.CoreExamplesTest.WordCount(CoreExamplesTest.java:48)
Caused by: jdk.nashorn.internal.runtime.ECMAException: TypeError: Cannot load script from eclairjs/sql/sparkSession
at org.eclairjs.nashorn.CoreExamplesTest.WordCount(CoreExamplesTest.java:48)
SparkLR(org.eclairjs.nashorn.CoreExamplesTest) Time elapsed: 0.006 sec <<< ERROR!
javax.script.ScriptException: TypeError: Cannot load script from eclairjs/sql/sparkSession in file:/root/eclairjs/server/target/classes/eclairjs/jvm-npm/jvm-npm.js at line number 122
at org.eclairjs.nashorn.CoreExamplesTest.SparkLR(CoreExamplesTest.java:88)
Caused by: jdk.nashorn.internal.runtime.ECMAException: TypeError: Cannot load script from eclairjs/sql/sparkSession
at org.eclairjs.nashorn.CoreExamplesTest.SparkLR(CoreExamplesTest.java:88)
SparkPI(org.eclairjs.nashorn.CoreExamplesTest) Time elapsed: 0.007 sec <<< ERROR!
javax.script.ScriptException: TypeError: Cannot load script from eclairjs/sql/sparkSession in file:/root/eclairjs/server/target/classes/eclairjs/jvm-npm/jvm-npm.js at line number 122
at org.eclairjs.nashorn.CoreExamplesTest.SparkPI(CoreExamplesTest.java:76)
Caused by: jdk.nashorn.internal.runtime.ECMAException: TypeError: Cannot load script from eclairjs/sql/sparkSession
at org.eclairjs.nashorn.CoreExamplesTest.SparkPI(CoreExamplesTest.java:76)
SparkTC(org.eclairjs.nashorn.CoreExamplesTest) Time elapsed: 0.006 sec <<< ERROR!
javax.script.ScriptException: TypeError: Cannot load script from eclairjs/sql/sparkSession in file:/root/eclairjs/server/target/classes/eclairjs/jvm-npm/jvm-npm.js at line number 122
at org.eclairjs.nashorn.CoreExamplesTest.SparkTC(CoreExamplesTest.java:64)
Caused by: jdk.nashorn.internal.runtime.ECMAException: TypeError: Cannot load script from eclairjs/sql/sparkSession
at org.eclairjs.nashorn.CoreExamplesTest.SparkTC(CoreExamplesTest.java:64)
PageRank(org.eclairjs.nashorn.CoreExamplesTest) Time elapsed: 0.008 sec <<< ERROR!
javax.script.ScriptException: TypeError: Cannot load script from eclairjs/sql/sparkSession in file:/root/eclairjs/server/target/classes/eclairjs/jvm-npm/jvm-npm.js at line number 122
at org.eclairjs.nashorn.CoreExamplesTest.PageRank(CoreExamplesTest.java:100)
Caused by: jdk.nashorn.internal.runtime.ECMAException: TypeError: Cannot load script from eclairjs/sql/sparkSession
at org.eclairjs.nashorn.CoreExamplesTest.PageRank(CoreExamplesTest.java:100)
LogQuery(org.eclairjs.nashorn.CoreExamplesTest) Time elapsed: 0.007 sec <<< ERROR!
javax.script.ScriptException: TypeError: Cannot load script from eclairjs/sql/sparkSession in file:/root/eclairjs/server/target/classes/eclairjs/jvm-npm/jvm-npm.js at line number 122
at org.eclairjs.nashorn.CoreExamplesTest.LogQuery(CoreExamplesTest.java:115)
Caused by: jdk.nashorn.internal.runtime.ECMAException: TypeError: Cannot load script from eclairjs/sql/sparkSession
at org.eclairjs.nashorn.CoreExamplesTest.LogQuery(CoreExamplesTest.java:115)
Can anyone please help me get past this error, or share some way to use Apache Spark in my Node application?
Thank you.