Failed to find data source: com.stratio.datasource.mongodb - apache-spark

I have read through the issues reported for the other Stratio packages, but I couldn't solve my problem.
I get the error both when I try:
./bin/spark-submit --packages com.stratio.datasource:spark-mongodb_2.11:0.12.0
and when I try:
./bin/spark-submit --jars /home/user/Spark-MongoDB/spark-mongodb_2.11/target/spark-mongodb_2.11-0.12.1-RC1-SNAPSHOT.jar
I've been struggling with this for the past two days; what am I doing wrong? I'm using Spark 2.0.0 on Ubuntu 14.04.

This is the command I am using on my test machine and it is working fine.
spark-submit --packages com.stratio.datasource:spark-mongodb_2.11:0.12.0 --master local[1] Cell.py
I have the same environment, i.e. Ubuntu 14.04, Scala 2.11 and Spark 2.0.0.
I am writing data to MongoDB from my Python program and it works as expected.
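For reference, a minimal PySpark sketch of that kind of write might look roughly like the following (the host, database and collection names are placeholders, not taken from the original Cell.py):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mongo-write-test").getOrCreate()
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Write through the Stratio data source; the package has to be on the classpath,
# e.g. via --packages com.stratio.datasource:spark-mongodb_2.11:0.12.0
(df.write
   .format("com.stratio.datasource.mongodb")
   .mode("overwrite")
   .options(host="localhost:27017", database="testdb", collection="people")
   .save())

# Read it back through the same data source to check the write
spark.read \
    .format("com.stratio.datasource.mongodb") \
    .options(host="localhost:27017", database="testdb", collection="people") \
    .load() \
    .show()

One possible reason the --jars variant fails is that a single jar does not bring in the data source's transitive dependencies, while --packages does.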

Related

Pyspark: - Failed to initialise Spark session (Another SparkContext is being constructed)

Hi, I am pretty new to Spark. I want to use PySpark to stream data from Kafka to Mongo, but I am not able to run pyspark: every time I run it in the terminal it gives the following error. I have deleted and reinstalled Java, Kafka, Scala and PySpark multiple times but could not resolve it; I found a few suggested fixes and tried them, but none worked. If I run spark-shell in the terminal it works, although it prints a warning.
Here are the pyspark and Java versions I have right now:
If you have a solution for this, please help me; I have hit a wall with this error.
Hey guys, if you are facing the same issue you can do what I did.
I removed Spark, Scala, Java and PySpark completely,
then reinstalled with brew reinstall apache-spark.
After that you can use pyspark or spark-shell to run it again.
It worked for me: my spark-shell was also giving an error, so reinstalling Apache Spark solved both.
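If the root cause really is a second SparkContext being constructed (as the question title suggests) rather than a broken install, a minimal sketch of a workaround is to reuse the running context instead of building a new one:

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("kafka-to-mongo").setMaster("local[*]")

# getOrCreate() hands back the SparkContext that is already running, if any,
# instead of trying to construct a second one (which is what raises the error)
sc = SparkContext.getOrCreate(conf)
print(sc.version)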

spark-cassandra connector issue

I am using Spark 1.6.2 with Scala version 2.10.5.
I have installed Cassandra locally and downloaded spark-cassandra-connector_2.10-1.6.2.jar from https://spark-packages.org/package/datastax/spark-cassandra-connector
But when I try to fire up the Spark shell with the connector, I get this error.
Can someone please tell me whether I am downloading the wrong version of the connector, or whether there is some other issue?
Just put : between spark-cassandra-connector and 1.6.2 instead of _, and remove the ; character after the connector version:
spark-shell --packages datastax:spark-cassandra-connector:1.6.2-s_2.10
But it's better to use the latest 1.6.x release, 1.6.11, instead of 1.6.2.
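Once the shell starts with the package on the classpath, a minimal PySpark sketch of reading a Cassandra table through the connector looks roughly like this (the keyspace and table names are placeholders; the Scala API follows the same pattern):

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

# spark.cassandra.connection.host must point at a reachable Cassandra node
conf = (SparkConf()
        .setAppName("cassandra-read-test")
        .set("spark.cassandra.connection.host", "127.0.0.1"))
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

# The connector registers the "org.apache.spark.sql.cassandra" data source
df = (sqlContext.read
      .format("org.apache.spark.sql.cassandra")
      .options(keyspace="test_ks", table="test_table")
      .load())
df.show()

launched with, for example:
pyspark --packages datastax:spark-cassandra-connector:1.6.11-s_2.10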

how to setup pyspark with zeppelin on windows 10

I have had difficulties installing Zeppelin 0.7.2.
Using the version of Spark that comes bundled with Zeppelin 0.7.2, I can run Spark code, but I am unable to run %pyspark code even after modifying the Python environment variables to point to where Python is installed (Python was installed using Anaconda).
%python code works fine.
If anyone can help resolve this issue I would be grateful. (The odd thing is that I have done the same installation on another Windows 10 laptop and pyspark does execute.)
The error I get is: pyspark is not responding.

Spark notebook in Hue 3.11

I am trying to set up the Spark notebook in Hue (version 3.11) with Spark 2.0.0 using Livy 0.2.0.
With Spark 1.6.1 the notebook works perfectly fine.
Livy only supports Scala 2.10 builds of Spark, so I built Spark 2.0.0 with Scala 2.10.6. When I open spark-shell (2.0.0) it clearly says "Using Scala version 2.10.6".
But the Spark notebook is not working with this build. In the Spark notebook, when I execute 1+1 it gives the following error.
What could be wrong here? Below is the exception from the logs:
"java.util.concurrent.ExecutionException: com.cloudera.livy.rsc.rpc.RpcException: java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.render(Lorg/json4s/JsonAST$JValue;)Lorg/json4s/JsonAST$JValue;\ncom.cloudera.livy.repl.ReplDriver$$anonfun$handle$2.apply(ReplDriver.scala:78)\ncom.cloudera.livy.repl.ReplDriver$$anonfun$handle$2.apply(ReplDriver.scala:78)\nscala.Option.map(Option.scala:145)\ncom.cloudera.livy.repl.ReplDriver.handle(ReplDriver.scala:78)\nsun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nsun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\nsun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\njava.lang.reflect.Method.invoke(Method.java:606)\ncom.cloudera.livy.rsc.rpc.RpcDispatcher.handleCall(RpcDispatcher.java:130)\ncom.cloudera.livy.rsc.rpc.RpcDispatcher.channelRead0(RpcDispatcher.java:77)\nio.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)\nio.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)\nio.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)\nio.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)\nio.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)\nio.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)\nio.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)\nio.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)\nio.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)\nio.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)\nio.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)\nio.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)\nio.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)\nio.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)\nio.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)\nio.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)\nio.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)\njava.lang.Thread.run(Thread.java:745)" (error 500)
This solved my problem.
Download the latest Livy code from GitHub and build it with the Maven command below:
mvn clean package -DskipTests -Dspark-2.0 -Dscala-2.11
I'm not sure that this is even possible.
According to the release notes, Hue 3.11 does not work with Spark 2.0 (it works with Spark 1.6).

Connecting SparkR to the spark cluster

I have a spark cluster running on 10 machines (1 - 10) with the master at machine 1. All of these run on CentOS 6.4.
I am trying to connect a JupyterHub installation (which is running inside an Ubuntu Docker container because of issues with installing it on CentOS) to the cluster using SparkR and get the Spark context.
The code I am using is
Sys.setenv(SPARK_HOME="/usr/local/spark-1.4.1-bin-hadoop2.4")
library(SparkR)
sc <- sparkR.init(master="spark://<master-ip>:7077")
The output I get is
attaching package: ‘SparkR’
The following object is masked from ‘package:stats’:
filter
The following objects are masked from ‘package:base’:
intersect, sample, table
Launching java with spark-submit command spark-submit sparkr-shell/tmp/Rtmpzo6esw/backend_port29e74b83c7b3 Error in sparkR.init(master = "spark://10.10.5.51:7077"): JVM is not ready after 10 seconds
Error in sparkRSQL.init(sc): object 'sc' not found
I am using Spark 1.4.1. The spark cluster is also running CDH 5.
The jupyterhub installation can connect to the cluster via pyspark and I have python notebooks which use pyspark.
Can someone tell me what I am doing wrong?
I have a similar problem and have been searching all around with no solution. Can you please tell me what you mean by "jupyterhub installation (which is running inside a ubuntu docker because of issues with installing on CentOS)"?
We also have 4 clusters on CentOS 6.4. Another problem of mine is how to use an IDE like IPython or RStudio to interact with these 4 servers. Do I use my laptop to connect to these servers remotely (if yes, then how?), and if not, what would the alternative be?
Now, to answer your question, I can give it a try. I think you have to use the --yarn-cluster option, as stated here. I hope this helps you solve the problem.
Cheers,
Ashish
