spark-shell, object XXX is not a member of package YYY - apache-spark

I'm using spark-shell to aid my development of a standalone Spark program.
When I run my program via spark-submit, the geohex package is imported correctly:
package com.verve.parentchild
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.elasticsearch.spark._
import net.geohex._
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.{FloatType, DateType, StructType, StructField, StringType, IntegerType};
object Sessions {
  def main(args: Array[String]) {
But when I try to import the GeoHex package in spark-shell, I get an error:
➜ spark git:(master) ✗ spark-shell
16/01/14 07:11:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/14 07:11:29 INFO SecurityManager: Changing view acls to: jspooner
16/01/14 07:11:29 INFO SecurityManager: Changing modify acls to: jspooner
16/01/14 07:11:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jspooner); users with modify permissions: Set(jspooner)
16/01/14 07:11:29 INFO HttpServer: Starting HTTP Server
16/01/14 07:11:29 INFO Utils: Successfully started service 'HTTP class server' on port 55790.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.4.0
/_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_05)
Type in expressions to have them evaluated.
Type :help for more information.
16/01/14 07:11:32 INFO SparkContext: Running Spark version 1.4.0
16/01/14 07:11:32 INFO SecurityManager: Changing view acls to: jspooner
16/01/14 07:11:32 INFO SecurityManager: Changing modify acls to: jspooner
16/01/14 07:11:32 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jspooner); users with modify permissions: Set(jspooner)
16/01/14 07:11:32 INFO Slf4jLogger: Slf4jLogger started
16/01/14 07:11:32 INFO Remoting: Starting remoting
16/01/14 07:11:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver#192.168.1.130:55793]
16/01/14 07:11:32 INFO Utils: Successfully started service 'sparkDriver' on port 55793.
16/01/14 07:11:32 INFO SparkEnv: Registering MapOutputTracker
16/01/14 07:11:32 INFO SparkEnv: Registering BlockManagerMaster
16/01/14 07:11:32 INFO DiskBlockManager: Created local directory at /private/var/folders/5d/q9yy3dwn2kv6q3xkqqb5dfv80000gp/T/spark-a2b2a7ea-54c8-465f-b9c5-3a7c07571879/blockmgr-61509001-188c-4a31-b417-f593b69753bb
16/01/14 07:11:32 INFO MemoryStore: MemoryStore started with capacity 265.1 MB
16/01/14 07:11:32 INFO HttpFileServer: HTTP File server directory is /private/var/folders/5d/q9yy3dwn2kv6q3xkqqb5dfv80000gp/T/spark-a2b2a7ea-54c8-465f-b9c5-3a7c07571879/httpd-235c0b35-10d6-4f47-848f-b8c43b52a2bd
16/01/14 07:11:32 INFO HttpServer: Starting HTTP Server
16/01/14 07:11:32 INFO Utils: Successfully started service 'HTTP file server' on port 55794.
16/01/14 07:11:32 INFO SparkEnv: Registering OutputCommitCoordinator
16/01/14 07:11:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/01/14 07:11:32 INFO SparkUI: Started SparkUI at http://192.168.1.130:4040
16/01/14 07:11:32 INFO Executor: Starting executor ID driver on host localhost
16/01/14 07:11:32 INFO Executor: Using REPL class URI: http://192.168.1.130:55790
16/01/14 07:11:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55795.
16/01/14 07:11:32 INFO NettyBlockTransferService: Server created on 55795
16/01/14 07:11:32 INFO BlockManagerMaster: Trying to register BlockManager
16/01/14 07:11:32 INFO BlockManagerMasterEndpoint: Registering block manager localhost:55795 with 265.1 MB RAM, BlockManagerId(driver, localhost, 55795)
16/01/14 07:11:32 INFO BlockManagerMaster: Registered BlockManager
16/01/14 07:11:32 INFO SparkILoop: Created spark context..
Spark context available as sc.
16/01/14 07:11:33 INFO HiveContext: Initializing execution hive, version 0.13.1
16/01/14 07:11:33 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/01/14 07:11:33 INFO ObjectStore: ObjectStore, initialize called
16/01/14 07:11:33 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/01/14 07:11:33 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/01/14 07:11:33 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/14 07:11:33 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/14 07:11:34 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/01/14 07:11:34 INFO MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "#" (64), after : "".
16/01/14 07:11:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/14 07:11:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/14 07:11:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/14 07:11:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/14 07:11:35 INFO ObjectStore: Initialized ObjectStore
16/01/14 07:11:35 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
16/01/14 07:11:35 INFO HiveMetaStore: Added admin role in metastore
16/01/14 07:11:35 INFO HiveMetaStore: Added public role in metastore
16/01/14 07:11:35 INFO HiveMetaStore: No user is added in admin role, since config is empty
16/01/14 07:11:36 INFO SessionState: No Tez session required at this point. hive.execution.engine=mr.
16/01/14 07:11:36 INFO SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.
scala> import net.geohex._
<console>:19: error: object geohex is not a member of package net
import net.geohex._
^
scala>
Tathagata Das, in this article, says that they separated this non-core functionality into separate subprojects so that their dependencies do not collide with or pollute those of core Spark. His example was someone trying to import the Twitter stream, but I'm just trying to import a simple GeoHex class.

Try using --driver-library-path on the command line when launching the shell, as explained here.
You can see the available options with ./spark-shell --help
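For example (a sketch, not from the original post; the paths below are placeholders), the option is passed when the shell is launched, and a plain jar such as the GeoHex one can alternatively be put on the classpath with --jars:
$SPARK_HOME/bin/spark-shell --driver-library-path /path/to/native/libs
$SPARK_HOME/bin/spark-shell --jars /path/to/geohex.jar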

Related

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`product`' given input columns: [jsontostructs(message)];

C:\Users\sorun\.jdks\openjdk-14.0.1\bin\java.exe "-javaagent:D:\Intellij IDEA\IntelliJ IDEA 2020.1.1\lib\idea_rt.jar=50945:D:\Intellij IDEA\IntelliJ IDEA 2020.1.1\bin" -Dfile.encoding=UTF-8 -classpath C:\Users\sorun\IdeaProjects\spark-streaming-kafka\target\classes;C:\Users\sorun\.m2\repository\org\apache\spark\spark-sql_2.11\2.2.0\spark-sql_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\com\univocity\univocity-parsers\2.2.1\univocity-parsers-2.2.1.jar;C:\Users\sorun\.m2\repository\org\apache\spark\spark-sketch_2.11\2.2.0\spark-sketch_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\org\apache\spark\spark-core_2.11\2.2.0\spark-core_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\org\apache\avro\avro\1.7.7\avro-1.7.7.jar;C:\Users\sorun\.m2\repository\com\thoughtworks\paranamer\paranamer\2.3\paranamer-2.3.jar;C:\Users\sorun\.m2\repository\org\apache\commons\commons-compress\1.4.1\commons-compress-1.4.1.jar;C:\Users\sorun\.m2\repository\org\tukaani\xz\1.0\xz-1.0.jar;C:\Users\sorun\.m2\repository\org\apache\avro\avro-mapred\1.7.7\avro-mapred-1.7.7-hadoop2.jar;C:\Users\sorun\.m2\repository\org\apache\avro\avro-ipc\1.7.7\avro-ipc-1.7.7.jar;C:\Users\sorun\.m2\repository\org\apache\avro\avro-ipc\1.7.7\avro-ipc-1.7.7-tests.jar;C:\Users\sorun\.m2\repository\com\twitter\chill_2.11\0.8.0\chill_2.11-0.8.0.jar;C:\Users\sorun\.m2\repository\com\esotericsoftware\kryo-shaded\3.0.3\kryo-shaded-3.0.3.jar;C:\Users\sorun\.m2\repository\com\esotericsoftware\minlog\1.3.0\minlog-1.3.0.jar;C:\Users\sorun\.m2\repository\org\objenesis\objenesis\2.1\objenesis-2.1.jar;C:\Users\sorun\.m2\repository\com\twitter\chill-java\0.8.0\chill-java-0.8.0.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-client\2.6.5\hadoop-client-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-common\2.6.5\hadoop-common-2.6.5.jar;C:\Users\sorun\.m2\repository\commons-cli\commons-cli\1.2\commons-cli-1.2.jar;C:\Users\sorun\.m2\repository\xmlenc\xmlenc\0.52\xmlenc-0.52.jar;C:\Users\sorun\.m2\repository\commons-httpclient\commons-httpclient\3.1\commons-httpclient-3.1.jar;C:\Users\sorun\.m2\repository\commons-io\commons-io\2.4\commons-io-2.4.jar;C:\Users\sorun\.m2\repository\commons-collections\commons-collections\3.2.2\commons-collections-3.2.2.jar;C:\Users\sorun\.m2\repository\commons-lang\commons-lang\2.6\commons-lang-2.6.jar;C:\Users\sorun\.m2\repository\commons-configuration\commons-configuration\1.6\commons-configuration-1.6.jar;C:\Users\sorun\.m2\repository\commons-digester\commons-digester\1.8\commons-digester-1.8.jar;C:\Users\sorun\.m2\repository\commons-beanutils\commons-beanutils\1.7.0\commons-beanutils-1.7.0.jar;C:\Users\sorun\.m2\repository\commons-beanutils\commons-beanutils-core\1.8.0\commons-beanutils-core-1.8.0.jar;C:\Users\sorun\.m2\repository\com\google\protobuf\protobuf-java\2.5.0\protobuf-java-2.5.0.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-auth\2.6.5\hadoop-auth-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\directory\server\apacheds-kerberos-codec\2.0.0-M15\apacheds-kerberos-codec-2.0.0-M15.jar;C:\Users\sorun\.m2\repository\org\apache\directory\server\apacheds-i18n\2.0.0-M15\apacheds-i18n-2.0.0-M15.jar;C:\Users\sorun\.m2\repository\org\apache\directory\api\api-asn1-api\1.0.0-M20\api-asn1-api-1.0.0-M20.jar;C:\Users\sorun\.m2\repository\org\apache\directory\api\api-util\1.0.0-M20\api-util-1.0.0-M20.jar;C:\Users\sorun\.m2\repository\org\apache\curator\curator-client\2.6.0\curator-client-2.6.0.jar;C:\Users\sorun\.m2\repository\org\htrace\htrace-core\3.0.4\htrace-core-3.0.4.jar;C:\Users\sorun\.m2\
repository\org\apache\hadoop\hadoop-hdfs\2.6.5\hadoop-hdfs-2.6.5.jar;C:\Users\sorun\.m2\repository\org\mortbay\jetty\jetty-util\6.1.26\jetty-util-6.1.26.jar;C:\Users\sorun\.m2\repository\xerces\xercesImpl\2.9.1\xercesImpl-2.9.1.jar;C:\Users\sorun\.m2\repository\xml-apis\xml-apis\1.3.04\xml-apis-1.3.04.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-mapreduce-client-app\2.6.5\hadoop-mapreduce-client-app-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-mapreduce-client-common\2.6.5\hadoop-mapreduce-client-common-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-yarn-client\2.6.5\hadoop-yarn-client-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-yarn-server-common\2.6.5\hadoop-yarn-server-common-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-mapreduce-client-shuffle\2.6.5\hadoop-mapreduce-client-shuffle-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-yarn-api\2.6.5\hadoop-yarn-api-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-mapreduce-client-core\2.6.5\hadoop-mapreduce-client-core-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-yarn-common\2.6.5\hadoop-yarn-common-2.6.5.jar;C:\Users\sorun\.m2\repository\javax\xml\bind\jaxb-api\2.2.2\jaxb-api-2.2.2.jar;C:\Users\sorun\.m2\repository\javax\xml\stream\stax-api\1.0-2\stax-api-1.0-2.jar;C:\Users\sorun\.m2\repository\org\codehaus\jackson\jackson-jaxrs\1.9.13\jackson-jaxrs-1.9.13.jar;C:\Users\sorun\.m2\repository\org\codehaus\jackson\jackson-xc\1.9.13\jackson-xc-1.9.13.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-mapreduce-client-jobclient\2.6.5\hadoop-mapreduce-client-jobclient-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\hadoop\hadoop-annotations\2.6.5\hadoop-annotations-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\spark\spark-launcher_2.11\2.2.0\spark-launcher_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\org\apache\spark\spark-network-common_2.11\2.2.0\spark-network-common_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\org\fusesource\leveldbjni\leveldbjni-all\1.8\leveldbjni-all-1.8.jar;C:\Users\sorun\.m2\repository\org\apache\spark\spark-network-shuffle_2.11\2.2.0\spark-network-shuffle_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\org\apache\spark\spark-unsafe_2.11\2.2.0\spark-unsafe_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\net\java\dev\jets3t\jets3t\0.9.3\jets3t-0.9.3.jar;C:\Users\sorun\.m2\repository\org\apache\httpcomponents\httpcore\4.3.3\httpcore-4.3.3.jar;C:\Users\sorun\.m2\repository\org\apache\httpcomponents\httpclient\4.3.6\httpclient-4.3.6.jar;C:\Users\sorun\.m2\repository\javax\activation\activation\1.1.1\activation-1.1.1.jar;C:\Users\sorun\.m2\repository\mx4j\mx4j\3.0.2\mx4j-3.0.2.jar;C:\Users\sorun\.m2\repository\javax\mail\mail\1.4.7\mail-1.4.7.jar;C:\Users\sorun\.m2\repository\org\bouncycastle\bcprov-jdk15on\1.51\bcprov-jdk15on-1.51.jar;C:\Users\sorun\.m2\repository\com\jamesmurty\utils\java-xmlbuilder\1.0\java-xmlbuilder-1.0.jar;C:\Users\sorun\.m2\repository\net\iharder\base64\2.3.8\base64-2.3.8.jar;C:\Users\sorun\.m2\repository\org\apache\curator\curator-recipes\2.6.0\curator-recipes-2.6.0.jar;C:\Users\sorun\.m2\repository\org\apache\curator\curator-framework\2.6.0\curator-framework-2.6.0.jar;C:\Users\sorun\.m2\repository\org\apache\zookeeper\zookeeper\3.4.6\zookeeper-3.4.6.jar;C:\Users\sorun\.m2\repository\com\google\guava\guava\16.0.1\guava-16.0.1.jar;C:\Users\sorun\.m2\repository\javax\servlet\javax.servlet-api\3.1.0\javax.servlet-api-3.1.0.jar;C:\Users\sorun\.m2\repository
\org\apache\commons\commons-lang3\3.5\commons-lang3-3.5.jar;C:\Users\sorun\.m2\repository\org\apache\commons\commons-math3\3.4.1\commons-math3-3.4.1.jar;C:\Users\sorun\.m2\repository\com\google\code\findbugs\jsr305\1.3.9\jsr305-1.3.9.jar;C:\Users\sorun\.m2\repository\org\slf4j\slf4j-api\1.7.16\slf4j-api-1.7.16.jar;C:\Users\sorun\.m2\repository\org\slf4j\jul-to-slf4j\1.7.16\jul-to-slf4j-1.7.16.jar;C:\Users\sorun\.m2\repository\org\slf4j\jcl-over-slf4j\1.7.16\jcl-over-slf4j-1.7.16.jar;C:\Users\sorun\.m2\repository\log4j\log4j\1.2.17\log4j-1.2.17.jar;C:\Users\sorun\.m2\repository\org\slf4j\slf4j-log4j12\1.7.16\slf4j-log4j12-1.7.16.jar;C:\Users\sorun\.m2\repository\com\ning\compress-lzf\1.0.3\compress-lzf-1.0.3.jar;C:\Users\sorun\.m2\repository\org\xerial\snappy\snappy-java\1.1.2.6\snappy-java-1.1.2.6.jar;C:\Users\sorun\.m2\repository\net\jpountz\lz4\lz4\1.3.0\lz4-1.3.0.jar;C:\Users\sorun\.m2\repository\org\roaringbitmap\RoaringBitmap\0.5.11\RoaringBitmap-0.5.11.jar;C:\Users\sorun\.m2\repository\commons-net\commons-net\2.2\commons-net-2.2.jar;C:\Users\sorun\.m2\repository\org\scala-lang\scala-library\2.11.8\scala-library-2.11.8.jar;C:\Users\sorun\.m2\repository\org\json4s\json4s-jackson_2.11\3.2.11\json4s-jackson_2.11-3.2.11.jar;C:\Users\sorun\.m2\repository\org\json4s\json4s-core_2.11\3.2.11\json4s-core_2.11-3.2.11.jar;C:\Users\sorun\.m2\repository\org\json4s\json4s-ast_2.11\3.2.11\json4s-ast_2.11-3.2.11.jar;C:\Users\sorun\.m2\repository\org\scala-lang\scalap\2.11.0\scalap-2.11.0.jar;C:\Users\sorun\.m2\repository\org\scala-lang\scala-compiler\2.11.0\scala-compiler-2.11.0.jar;C:\Users\sorun\.m2\repository\org\scala-lang\modules\scala-xml_2.11\1.0.1\scala-xml_2.11-1.0.1.jar;C:\Users\sorun\.m2\repository\org\scala-lang\modules\scala-parser-combinators_2.11\1.0.1\scala-parser-combinators_2.11-1.0.1.jar;C:\Users\sorun\.m2\repository\org\glassfish\jersey\core\jersey-client\2.22.2\jersey-client-2.22.2.jar;C:\Users\sorun\.m2\repository\javax\ws\rs\javax.ws.rs-api\2.0.1\javax.ws.rs-api-2.0.1.jar;C:\Users\sorun\.m2\repository\org\glassfish\hk2\hk2-api\2.4.0-b34\hk2-api-2.4.0-b34.jar;C:\Users\sorun\.m2\repository\org\glassfish\hk2\hk2-utils\2.4.0-b34\hk2-utils-2.4.0-b34.jar;C:\Users\sorun\.m2\repository\org\glassfish\hk2\external\aopalliance-repackaged\2.4.0-b34\aopalliance-repackaged-2.4.0-b34.jar;C:\Users\sorun\.m2\repository\org\glassfish\hk2\external\javax.inject\2.4.0-b34\javax.inject-2.4.0-b34.jar;C:\Users\sorun\.m2\repository\org\glassfish\hk2\hk2-locator\2.4.0-b34\hk2-locator-2.4.0-b34.jar;C:\Users\sorun\.m2\repository\org\javassist\javassist\3.18.1-GA\javassist-3.18.1-GA.jar;C:\Users\sorun\.m2\repository\org\glassfish\jersey\core\jersey-common\2.22.2\jersey-common-2.22.2.jar;C:\Users\sorun\.m2\repository\javax\annotation\javax.annotation-api\1.2\javax.annotation-api-1.2.jar;C:\Users\sorun\.m2\repository\org\glassfish\jersey\bundles\repackaged\jersey-guava\2.22.2\jersey-guava-2.22.2.jar;C:\Users\sorun\.m2\repository\org\glassfish\hk2\osgi-resource-locator\1.0.1\osgi-resource-locator-1.0.1.jar;C:\Users\sorun\.m2\repository\org\glassfish\jersey\core\jersey-server\2.22.2\jersey-server-2.22.2.jar;C:\Users\sorun\.m2\repository\org\glassfish\jersey\media\jersey-media-jaxb\2.22.2\jersey-media-jaxb-2.22.2.jar;C:\Users\sorun\.m2\repository\javax\validation\validation-api\1.1.0.Final\validation-api-1.1.0.Final.jar;C:\Users\sorun\.m2\repository\org\glassfish\jersey\containers\jersey-container-servlet\2.22.2\jersey-container-servlet-2.22.2.jar;C:\Users\sorun\.m2\repository\org\glassfish\jersey\containers\jer
sey-container-servlet-core\2.22.2\jersey-container-servlet-core-2.22.2.jar;C:\Users\sorun\.m2\repository\io\netty\netty-all\4.0.43.Final\netty-all-4.0.43.Final.jar;C:\Users\sorun\.m2\repository\io\netty\netty\3.9.9.Final\netty-3.9.9.Final.jar;C:\Users\sorun\.m2\repository\com\clearspring\analytics\stream\2.7.0\stream-2.7.0.jar;C:\Users\sorun\.m2\repository\io\dropwizard\metrics\metrics-core\3.1.2\metrics-core-3.1.2.jar;C:\Users\sorun\.m2\repository\io\dropwizard\metrics\metrics-jvm\3.1.2\metrics-jvm-3.1.2.jar;C:\Users\sorun\.m2\repository\io\dropwizard\metrics\metrics-json\3.1.2\metrics-json-3.1.2.jar;C:\Users\sorun\.m2\repository\io\dropwizard\metrics\metrics-graphite\3.1.2\metrics-graphite-3.1.2.jar;C:\Users\sorun\.m2\repository\com\fasterxml\jackson\module\jackson-module-scala_2.11\2.6.5\jackson-module-scala_2.11-2.6.5.jar;C:\Users\sorun\.m2\repository\com\fasterxml\jackson\module\jackson-module-paranamer\2.6.5\jackson-module-paranamer-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\ivy\ivy\2.4.0\ivy-2.4.0.jar;C:\Users\sorun\.m2\repository\oro\oro\2.0.8\oro-2.0.8.jar;C:\Users\sorun\.m2\repository\net\razorvine\pyrolite\4.13\pyrolite-4.13.jar;C:\Users\sorun\.m2\repository\net\sf\py4j\py4j\0.10.4\py4j-0.10.4.jar;C:\Users\sorun\.m2\repository\org\apache\commons\commons-crypto\1.0.0\commons-crypto-1.0.0.jar;C:\Users\sorun\.m2\repository\org\apache\spark\spark-catalyst_2.11\2.2.0\spark-catalyst_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\org\scala-lang\scala-reflect\2.11.8\scala-reflect-2.11.8.jar;C:\Users\sorun\.m2\repository\org\codehaus\janino\janino\3.0.0\janino-3.0.0.jar;C:\Users\sorun\.m2\repository\org\codehaus\janino\commons-compiler\3.0.0\commons-compiler-3.0.0.jar;C:\Users\sorun\.m2\repository\org\antlr\antlr4-runtime\4.5.3\antlr4-runtime-4.5.3.jar;C:\Users\sorun\.m2\repository\commons-codec\commons-codec\1.10\commons-codec-1.10.jar;C:\Users\sorun\.m2\repository\org\apache\spark\spark-tags_2.11\2.2.0\spark-tags_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\org\apache\parquet\parquet-column\1.8.2\parquet-column-1.8.2.jar;C:\Users\sorun\.m2\repository\org\apache\parquet\parquet-common\1.8.2\parquet-common-1.8.2.jar;C:\Users\sorun\.m2\repository\org\apache\parquet\parquet-encoding\1.8.2\parquet-encoding-1.8.2.jar;C:\Users\sorun\.m2\repository\org\apache\parquet\parquet-hadoop\1.8.2\parquet-hadoop-1.8.2.jar;C:\Users\sorun\.m2\repository\org\apache\parquet\parquet-format\2.3.1\parquet-format-2.3.1.jar;C:\Users\sorun\.m2\repository\org\apache\parquet\parquet-jackson\1.8.2\parquet-jackson-1.8.2.jar;C:\Users\sorun\.m2\repository\org\codehaus\jackson\jackson-mapper-asl\1.9.11\jackson-mapper-asl-1.9.11.jar;C:\Users\sorun\.m2\repository\org\codehaus\jackson\jackson-core-asl\1.9.11\jackson-core-asl-1.9.11.jar;C:\Users\sorun\.m2\repository\com\fasterxml\jackson\core\jackson-databind\2.6.5\jackson-databind-2.6.5.jar;C:\Users\sorun\.m2\repository\com\fasterxml\jackson\core\jackson-annotations\2.6.0\jackson-annotations-2.6.0.jar;C:\Users\sorun\.m2\repository\com\fasterxml\jackson\core\jackson-core\2.6.5\jackson-core-2.6.5.jar;C:\Users\sorun\.m2\repository\org\apache\xbean\xbean-asm5-shaded\4.4\xbean-asm5-shaded-4.4.jar;C:\Users\sorun\.m2\repository\org\spark-project\spark\unused\1.0.0\unused-1.0.0.jar;C:\Users\sorun\.m2\repository\org\apache\spark\spark-sql-kafka-0-10_2.11\2.2.0\spark-sql-kafka-0-10_2.11-2.2.0.jar;C:\Users\sorun\.m2\repository\org\apache\kafka\kafka-clients\0.10.0.1\kafka-clients-0.10.0.1.jar;C:\Users\sorun\.m2\repository\com\google\code\gson\gson\2.8.3\gson-2.8.3.jar 
StreamingConsumer
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/06/19 12:39:42 INFO SparkContext: Running Spark version 2.2.0
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/C:/Users/sorun/.m2/repository/org/apache/hadoop/hadoop-auth/2.6.5/hadoop-auth-2.6.5.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
20/06/19 12:39:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/06/19 12:39:44 INFO SparkContext: Submitted application: Streaming-kafka
20/06/19 12:39:44 INFO SecurityManager: Changing view acls to: OZAN-OKAN
20/06/19 12:39:44 INFO SecurityManager: Changing modify acls to: OZAN-OKAN
20/06/19 12:39:44 INFO SecurityManager: Changing view acls groups to:
20/06/19 12:39:44 INFO SecurityManager: Changing modify acls groups to:
20/06/19 12:39:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(OZAN-OKAN); groups with view permissions: Set(); users with modify permissions: Set(OZAN-OKAN); groups with modify permissions: Set()
20/06/19 12:39:45 INFO Utils: Successfully started service 'sparkDriver' on port 50966.
20/06/19 12:39:45 INFO SparkEnv: Registering MapOutputTracker
20/06/19 12:39:45 INFO SparkEnv: Registering BlockManagerMaster
20/06/19 12:39:45 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/06/19 12:39:45 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/06/19 12:39:45 INFO DiskBlockManager: Created local directory at C:\Users\sorun\AppData\Local\Temp\blockmgr-0794380e-6e2b-4559-bf6c-7d10c2074bc8
20/06/19 12:39:45 INFO MemoryStore: MemoryStore started with capacity 1040.4 MB
20/06/19 12:39:45 INFO SparkEnv: Registering OutputCommitCoordinator
20/06/19 12:39:45 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/06/19 12:39:46 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.1:4040
20/06/19 12:39:46 INFO Executor: Starting executor ID driver on host localhost
20/06/19 12:39:46 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50975.
20/06/19 12:39:46 INFO NettyBlockTransferService: Server created on 192.168.56.1:50975
20/06/19 12:39:46 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/06/19 12:39:46 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.1, 50975, None)
20/06/19 12:39:46 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.56.1:50975 with 1040.4 MB RAM, BlockManagerId(driver, 192.168.56.1, 50975, None)
20/06/19 12:39:46 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.1, 50975, None)
20/06/19 12:39:46 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.1, 50975, None)
20/06/19 12:39:46 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/C:/Users/sorun/IdeaProjects/spark-streaming-kafka/spark-warehouse/').
20/06/19 12:39:46 INFO SharedState: Warehouse path is 'file:/C:/Users/sorun/IdeaProjects/spark-streaming-kafka/spark-warehouse/'.
20/06/19 12:39:47 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/06/19 12:39:47 INFO CatalystSqlParser: Parsing command: string
20/06/19 12:39:49 INFO SparkSqlParser: Parsing command: CAST(value AS STRING) message
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`product`' given input columns: [jsontostructs(message)];
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:88)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:85)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4$$anonfun$apply$10.apply(TreeNode.scala:323)
at scala.collection.MapLike$MappedValues$$anonfun$iterator$3.apply(MapLike.scala:246)
at scala.collection.MapLike$MappedValues$$anonfun$iterator$3.apply(MapLike.scala:246)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.IterableLike$$anon$1.foreach(IterableLike.scala:311)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.MapBuilder.$plus$plus$eq(MapBuilder.scala:25)
at scala.collection.TraversableViewLike$class.force(TraversableViewLike.scala:88)
at scala.collection.IterableLike$$anon$1.force(IterableLike.scala:311)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:331)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:268)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:268)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:279)
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:289)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$6.apply(QueryPlan.scala:298)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:298)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:268)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:85)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:78)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:78)
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:91)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.resolveAndBind(ExpressionEncoder.scala:256)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:206)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:170)
at org.apache.spark.sql.Dataset$.apply(Dataset.scala:61)
at org.apache.spark.sql.Dataset.as(Dataset.scala:380)
at StreamingConsumer.main(StreamingConsumer.java:24)
20/06/19 12:39:50 INFO SparkContext: Invoking stop() from shutdown hook
20/06/19 12:39:50 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4040
20/06/19 12:39:50 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/06/19 12:39:50 INFO MemoryStore: MemoryStore cleared
20/06/19 12:39:50 INFO BlockManager: BlockManager stopped
20/06/19 12:39:50 INFO BlockManagerMaster: BlockManagerMaster stopped
20/06/19 12:39:50 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/06/19 12:39:50 INFO SparkContext: Successfully stopped SparkContext
20/06/19 12:39:50 INFO ShutdownHookManager: Shutdown hook called
20/06/19 12:39:50 INFO ShutdownHookManager: Deleting directory C:\Users\sorun\AppData\Local\Temp\spark-b70ecbcc-e6cf-4328-9069-97cc41cc72d7
Process finished with exit code 1
Exception in thread "main" org.apache.spark.sql.AnalysisException:
cannot resolve '`product`' given input columns: [jsontostructs(message)];
The exception message says the column you are selecting is not available in the DataFrame: rename the column jsontostructs(message) to product and use that column in the select.
And if you have a "message" field in your model, add it to the schema StructType:
StructType schema = new StructType().add("product", "string").add("time", DataTypes.TimestampType).add("message", DataTypes.StringType);
Then pass that schema when parsing the JSON, i.e. from_json(..., schema).as("json"):
Dataset<SearchProductModel> data = load.selectExpr("CAST(value AS STRING) as message")
        .select(functions.from_json(functions.col("message"), schema).as("json"))
        .select("json.*")
        .as(Encoders.bean(SearchProductModel.class));

jupyter notebook error when Starting Spark application using pyspark kernel

I've been trying to configure Jupyter Notebook with the PySpark kernel. I'm actually new to this and to Ubuntu. When I tried to run some code in the notebook using the PySpark kernel, I received the error log below.
Note that it used to work before, but without SQL magic; after I installed sparkmagic to use SQL magic, this happened.
I'd appreciate your help, thanks.
ID   YARN Application ID   Kind      State   Spark UI   Driver log   Current session?
1    None                  pyspark   idle                            ✔
The code failed because of a fatal error:
Session 1 unexpectedly reached final status 'error'. See logs:
stdout:
stderr:
19/10/12 16:47:57 WARN Utils: Your hostname, majd-desktop resolves to a loopback address: 127.0.1.1; using 192.168.1.6 instead (on interface enp1s0)
19/10/12 16:47:57 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
19/10/12 16:47:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/10/12 16:48:00 INFO SparkContext: Running Spark version 2.4.4
19/10/12 16:48:00 INFO SparkContext: Submitted application: livy-session-1
19/10/12 16:48:00 INFO SecurityManager: Changing view acls to: majd
19/10/12 16:48:00 INFO SecurityManager: Changing modify acls to: majd
19/10/12 16:48:00 INFO SecurityManager: Changing view acls groups to:
19/10/12 16:48:00 INFO SecurityManager: Changing modify acls groups to:
19/10/12 16:48:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(majd); groups with view permissions: Set(); users with modify permissions: Set(majd); groups with modify permissions: Set()
19/10/12 16:48:00 INFO Utils: Successfully started service 'sparkDriver' on port 33779.
19/10/12 16:48:00 INFO SparkEnv: Registering MapOutputTracker
19/10/12 16:48:00 INFO SparkEnv: Registering BlockManagerMaster
19/10/12 16:48:00 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/10/12 16:48:00 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/10/12 16:48:00 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d9d22c37-be4c-4498-b115-2011ee176dbf
19/10/12 16:48:00 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
19/10/12 16:48:00 INFO SparkEnv: Registering OutputCommitCoordinator
19/10/12 16:48:00 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
19/10/12 16:48:00 INFO Utils: Successfully started service 'SparkUI' on port 4041.
19/10/12 16:48:00 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.6:4041
19/10/12 16:48:00 INFO SparkContext: Added JAR file:///home/majd/anaconda3/share/apache-livy-0.4.0.60ee047/rsc/target/jars/livy-api-0.4.0-incubating-SNAPSHOT.jar at spark://192.168.1.6:33779/jars/livy-api-0.4.0-incubating-SNAPSHOT.jar with timestamp 1570888080918
19/10/12 16:48:00 INFO SparkContext: Added JAR file:///home/majd/anaconda3/share/apache-livy-0.4.0.60ee047/rsc/target/jars/livy-rsc-0.4.0-incubating-SNAPSHOT.jar at spark://192.168.1.6:33779/jars/livy-rsc-0.4.0-incubating-SNAPSHOT.jar with timestamp 1570888080919
19/10/12 16:48:00 INFO SparkContext: Added JAR file:///home/majd/anaconda3/share/apache-livy-0.4.0.60ee047/rsc/target/jars/netty-all-4.0.29.Final.jar at spark://192.168.1.6:33779/jars/netty-all-4.0.29.Final.jar with timestamp 1570888080919
19/10/12 16:48:00 INFO SparkContext: Added JAR file:///home/majd/anaconda3/share/apache-livy-0.4.0.60ee047/repl/scala-2.11/target/jars/commons-codec-1.9.jar at spark://192.168.1.6:33779/jars/commons-codec-1.9.jar with timestamp 1570888080919
19/10/12 16:48:00 INFO SparkContext: Added JAR file:///home/majd/anaconda3/share/apache-livy-0.4.0.60ee047/repl/scala-2.11/target/jars/livy-core_2.11-0.4.0-incubating-SNAPSHOT.jar at spark://192.168.1.6:33779/jars/livy-core_2.11-0.4.0-incubating-SNAPSHOT.jar with timestamp 1570888080920
19/10/12 16:48:00 INFO SparkContext: Added JAR file:///home/majd/anaconda3/share/apache-livy-0.4.0.60ee047/repl/scala-2.11/target/jars/livy-repl_2.11-0.4.0-incubating-SNAPSHOT.jar at spark://192.168.1.6:33779/jars/livy-repl_2.11-0.4.0-incubating-SNAPSHOT.jar with timestamp 1570888080920
19/10/12 16:48:00 INFO Executor: Starting executor ID driver on host localhost
19/10/12 16:48:01 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38259.
19/10/12 16:48:01 INFO NettyBlockTransferService: Server created on 192.168.1.6:38259
19/10/12 16:48:01 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/10/12 16:48:01 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.6, 38259, None)
19/10/12 16:48:01 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.6:38259 with 366.3 MB RAM, BlockManagerId(driver, 192.168.1.6, 38259, None)
19/10/12 16:48:01 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.6, 38259, None)
19/10/12 16:48:01 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.6, 38259, None).
Some things to try:
a) Make sure Spark has enough available resources for Jupyter to create a Spark context.
b) Contact your Jupyter administrator to make sure the Spark magics library is configured correctly.
c) Restart the kernel.

Cannot connect kafka server to spark

I'm trying to stream data from a Kafka server to Spark. As you can guess, I failed. I use Spark 2.2.0 and kafka_2.11-0.11.0.1. I loaded the jars into Eclipse and ran the code below.
package com.defne
import java.nio.ByteBuffer
import scala.util.Random
import org.apache.spark._
import org.apache.spark.streaming.dstream._
import org.apache.spark.streaming.kafka._
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel
import org.apache.log4j.Level
import java.util.regex.Pattern
import java.util.regex.Matcher
import kafka.serializer.StringDecoder
import Utilities._
import org.apache.spark.streaming.kafka
import org.apache.spark.streaming.kafka.KafkaUtils
object KafkaExample {
  def main(args: Array[String]) {
    val ssc = new StreamingContext("local[*]", "KafkaExample", Seconds(1))
    val kafkaParams = Map("metadata.broker.list" -> "kafkaIP:9092", "group.id" -> "console-consumer-9526", "zookeeper.connect" -> "localhost:2181")
    val topics = List("logstash_log").toSet
    val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics).map(_._2)
    lines.print()
    ssc.checkpoint("C:/checkpoint/")
    ssc.start()
    ssc.awaitTermination()
  }
}
And I got the output below. The interesting thing is that there is no error, but somehow I cannot connect to the Kafka server.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/01 10:16:55 INFO SparkContext: Running Spark version 2.2.0
17/11/01 10:16:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/01 10:16:56 INFO SparkContext: Submitted application: KafkaExample
17/11/01 10:16:56 INFO SecurityManager: Changing view acls to: user
17/11/01 10:16:56 INFO SecurityManager: Changing modify acls to: user
17/11/01 10:16:56 INFO SecurityManager: Changing view acls groups to:
17/11/01 10:16:56 INFO SecurityManager: Changing modify acls groups to:
17/11/01 10:16:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); groups with view permissions: Set(); users with modify permissions: Set(user); groups with modify permissions: Set()
17/11/01 10:16:58 INFO Utils: Successfully started service 'sparkDriver' on port 53749.
17/11/01 10:16:59 INFO SparkEnv: Registering MapOutputTracker
17/11/01 10:16:59 INFO SparkEnv: Registering BlockManagerMaster
17/11/01 10:16:59 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/01 10:16:59 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/01 10:16:59 INFO DiskBlockManager: Created local directory at C:\Users\user\AppData\Local\Temp\blockmgr-2fa455d5-ef26-4fb9-ba4b-caf9f2fa3a68
17/11/01 10:16:59 INFO MemoryStore: MemoryStore started with capacity 897.6 MB
17/11/01 10:16:59 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/01 10:16:59 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/11/01 10:17:00 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.1:4040
17/11/01 10:17:00 INFO Executor: Starting executor ID driver on host localhost
17/11/01 10:17:00 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53770.
17/11/01 10:17:00 INFO NettyBlockTransferService: Server created on 192.168.56.1:53770
17/11/01 10:17:00 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/01 10:17:00 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.1, 53770, None)
17/11/01 10:17:00 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.56.1:53770 with 897.6 MB RAM, BlockManagerId(driver, 192.168.56.1, 53770, None)
17/11/01 10:17:00 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.1, 53770, None)
17/11/01 10:17:00 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.1, 53770, None)
17/11/01 10:17:01 INFO VerifiableProperties: Verifying properties
17/11/01 10:17:01 INFO VerifiableProperties: Property group.id is overridden to console-consumer-9526
17/11/01 10:17:01 INFO VerifiableProperties: Property zookeeper.connect is overridden to localhost:2181
17/11/01 10:17:02 INFO SimpleConsumer: Reconnect due to error:
java.lang.NoSuchMethodError: org.apache.kafka.common.network.NetworkSend.<init>(Ljava/lang/String;Ljava/nio/ByteBuffer;)V
at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:41)
at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:44)
at kafka.network.BlockingChannel.send(BlockingChannel.scala:114)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:88)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:86)
at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:114)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$getPartitionMetadata$1.apply(KafkaCluster.scala:126)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$getPartitionMetadata$1.apply(KafkaCluster.scala:125)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:346)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:342)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at org.apache.spark.streaming.kafka.KafkaCluster.org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers(KafkaCluster.scala:342)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitionMetadata(KafkaCluster.scala:125)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitions(KafkaCluster.scala:112)
at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:211)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
at com.defne.KafkaExample$.main(KafkaExample.scala:44)
at com.defne.KafkaExample.main(KafkaExample.scala)
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.kafka.common.network.NetworkSend.<init>(Ljava/lang/String;Ljava/nio/ByteBuffer;)V
at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:41)
at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:44)
at kafka.network.BlockingChannel.send(BlockingChannel.scala:114)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:101)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:86)
at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:114)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$getPartitionMetadata$1.apply(KafkaCluster.scala:126)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$getPartitionMetadata$1.apply(KafkaCluster.scala:125)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:346)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:342)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at org.apache.spark.streaming.kafka.KafkaCluster.org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers(KafkaCluster.scala:342)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitionMetadata(KafkaCluster.scala:125)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitions(KafkaCluster.scala:112)
at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:211)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
at com.defne.KafkaExample$.main(KafkaExample.scala:44)
at com.defne.KafkaExample.main(KafkaExample.scala)
17/11/01 10:17:02 INFO SparkContext: Invoking stop() from shutdown hook
17/11/01 10:17:02 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4040
17/11/01 10:17:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/11/01 10:17:02 INFO MemoryStore: MemoryStore cleared
17/11/01 10:17:02 INFO BlockManager: BlockManager stopped
17/11/01 10:17:02 INFO BlockManagerMaster: BlockManagerMaster stopped
17/11/01 10:17:02 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/11/01 10:17:02 INFO SparkContext: Successfully stopped SparkContext
17/11/01 10:17:02 INFO ShutdownHookManager: Shutdown hook called
17/11/01 10:17:02 INFO ShutdownHookManager: Deleting directory C:\Users\user\AppData\Local\Temp\spark-a584950c-10ca-422b-990e-fd1980e2260c
Any help will be greatly appreciated.
NOTE: use hostname:port for the Kafka brokers, not for Zookeeper:
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
val ssc = new StreamingContext(new SparkConf, Seconds(60))
// hostname:port for Kafka brokers, not Zookeeper
val kafkaParams = Map("metadata.broker.list" -> "localhost:9092,anotherhost:9092")
val topics = Set("sometopic", "anothertopic")
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

Connect to Hive from Apache Spark [duplicate]

This question already has answers here:
How to connect Spark SQL to remote Hive metastore (via thrift protocol) with no hive-site.xml?
(11 answers)
Closed 2 years ago.
I have a simple program that I'm running on a standalone Cloudera VM. I have created a managed table in Hive, which I want to read in Apache Spark, but the initial connection to Hive is not being established. Please advise.
I'm running this program in IntelliJ. I have copied hive-site.xml from /etc/hive/conf to /etc/spark/conf, but even then the Spark job is not connecting to the Hive metastore.
public static void main(String[] args) throws AnalysisException {
    String master = "local[*]";
    SparkSession sparkSession = SparkSession
            .builder().appName(ConnectToHive.class.getName())
            .config("spark.sql.warehouse.dir", "hdfs://quickstart.cloudera:8020/user/hive/warehouse")
            .enableHiveSupport()
            .master(master).getOrCreate();
    SparkContext context = sparkSession.sparkContext();
    context.setLogLevel("ERROR");
    SQLContext sqlCtx = sparkSession.sqlContext();
    HiveContext hiveContext = new HiveContext(sparkSession);
    hiveContext.setConf("hive.metastore.warehouse.dir", "hdfs://quickstart.cloudera:8020/user/hive/warehouse");
    hiveContext.sql("SHOW DATABASES").show();
    hiveContext.sql("SHOW TABLES").show();
    sparkSession.close();
}
The output is below, where I expect to see the employee table so that I can query it. Since I'm running standalone, the Hive metastore is in a local MySQL server.
+------------+
|databaseName|
+------------+
| default|
+------------+
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
+--------+---------+-----------+
jdbc:mysql://127.0.0.1/metastore?createDatabaseIfNotExist=true is the configuration for Hive metastore
hive> show databases;
OK
default
sxm
temp
Time taken: 0.019 seconds, Fetched: 3 row(s)
hive> use default;
OK
Time taken: 0.015 seconds
hive> show tables;
OK
employee
Time taken: 0.014 seconds, Fetched: 1 row(s)
hive> describe formatted employee;
OK
# col_name data_type comment
id string
firstname string
lastname string
addresses array<struct<street:string,city:string,state:string>>
# Detailed Table Information
Database: default
Owner: cloudera
CreateTime: Tue Jul 25 06:33:01 PDT 2017
LastAccessTime: UNKNOWN
Protect Mode: None
Retention: 0
Location: hdfs://quickstart.cloudera:8020/user/hive/warehouse/employee
Table Type: MANAGED_TABLE
Table Parameters:
transient_lastDdlTime 1500989581
# Storage Information
SerDe Library: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
InputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
serialization.format 1
Time taken: 0.07 seconds, Fetched: 29 row(s)
hive>
Added Spark Logs
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/07/25 11:38:30 INFO SparkContext: Running Spark version 2.1.0
17/07/25 11:38:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/07/25 11:38:30 INFO SecurityManager: Changing view acls to: cloudera
17/07/25 11:38:30 INFO SecurityManager: Changing modify acls to: cloudera
17/07/25 11:38:30 INFO SecurityManager: Changing view acls groups to:
17/07/25 11:38:30 INFO SecurityManager: Changing modify acls groups to:
17/07/25 11:38:30 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(cloudera); groups with view permissions: Set(); users with modify permissions: Set(cloudera); groups with modify permissions: Set()
17/07/25 11:38:31 INFO Utils: Successfully started service 'sparkDriver' on port 55232.
17/07/25 11:38:31 INFO SparkEnv: Registering MapOutputTracker
17/07/25 11:38:31 INFO SparkEnv: Registering BlockManagerMaster
17/07/25 11:38:31 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/07/25 11:38:31 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/07/25 11:38:31 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-eb1e611f-1b88-487f-b600-3da1ff8353db
17/07/25 11:38:31 INFO MemoryStore: MemoryStore started with capacity 1909.8 MB
17/07/25 11:38:31 INFO SparkEnv: Registering OutputCommitCoordinator
17/07/25 11:38:31 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/07/25 11:38:31 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4040
17/07/25 11:38:31 INFO Executor: Starting executor ID driver on host localhost
17/07/25 11:38:31 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41433.
17/07/25 11:38:31 INFO NettyBlockTransferService: Server created on 10.0.2.15:41433
17/07/25 11:38:31 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/07/25 11:38:31 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.2.15, 41433, None)
17/07/25 11:38:31 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:41433 with 1909.8 MB RAM, BlockManagerId(driver, 10.0.2.15, 41433, None)
17/07/25 11:38:31 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.2.15, 41433, None)
17/07/25 11:38:31 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.2.15, 41433, None)
17/07/25 11:38:32 INFO SharedState: Warehouse path is 'file:/home/cloudera/works/JsonHive/spark-warehouse/'.
17/07/25 11:38:32 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
17/07/25 11:38:32 INFO deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
17/07/25 11:38:32 INFO deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
17/07/25 11:38:32 INFO deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
17/07/25 11:38:32 INFO deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
17/07/25 11:38:32 INFO deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
17/07/25 11:38:32 INFO deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
17/07/25 11:38:32 INFO deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
17/07/25 11:38:32 INFO deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
17/07/25 11:38:32 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/07/25 11:38:32 INFO ObjectStore: ObjectStore, initialize called
17/07/25 11:38:32 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
17/07/25 11:38:32 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
17/07/25 11:38:34 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
17/07/25 11:38:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/07/25 11:38:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/07/25 11:38:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/07/25 11:38:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/07/25 11:38:35 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery#0" since the connection used is closing
17/07/25 11:38:35 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/07/25 11:38:35 INFO ObjectStore: Initialized ObjectStore
17/07/25 11:38:36 INFO HiveMetaStore: Added admin role in metastore
17/07/25 11:38:36 INFO HiveMetaStore: Added public role in metastore
17/07/25 11:38:36 INFO HiveMetaStore: No user is added in admin role, since config is empty
17/07/25 11:38:36 INFO HiveMetaStore: 0: get_all_databases
17/07/25 11:38:36 INFO audit: ugi=cloudera ip=unknown-ip-addr cmd=get_all_databases
17/07/25 11:38:36 INFO HiveMetaStore: 0: get_functions: db=default pat=*
17/07/25 11:38:36 INFO audit: ugi=cloudera ip=unknown-ip-addr cmd=get_functions: db=default pat=*
17/07/25 11:38:36 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
17/07/25 11:38:36 INFO SessionState: Created local directory: /tmp/76258222-81db-4ac1-9566-1d8f05c3ecba_resources
17/07/25 11:38:36 INFO SessionState: Created HDFS directory: /tmp/hive/cloudera/76258222-81db-4ac1-9566-1d8f05c3ecba
17/07/25 11:38:36 INFO SessionState: Created local directory: /tmp/cloudera/76258222-81db-4ac1-9566-1d8f05c3ecba
17/07/25 11:38:36 INFO SessionState: Created HDFS directory: /tmp/hive/cloudera/76258222-81db-4ac1-9566-1d8f05c3ecba/_tmp_space.db
17/07/25 11:38:36 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/cloudera/works/JsonHive/spark-warehouse/
17/07/25 11:38:36 INFO HiveMetaStore: 0: get_database: default
17/07/25 11:38:36 INFO audit: ugi=cloudera ip=unknown-ip-addr cmd=get_database: default
17/07/25 11:38:36 INFO HiveMetaStore: 0: get_database: global_temp
17/07/25 11:38:36 INFO audit: ugi=cloudera ip=unknown-ip-addr cmd=get_database: global_temp
17/07/25 11:38:36 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
+------------+
|databaseName|
+------------+
| default|
+------------+
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
+--------+---------+-----------+
Process finished with exit code 0
UPDATE
/usr/lib/hive/conf/hive-site.xml was not on the classpath, so the tables were not being read. After adding it to the classpath it worked fine. I hit this problem because I was running from IntelliJ; in production the spark conf folder will have a link to hive-site.xml.
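A quick way to confirm this diagnosis from inside the application is to ask the driver's classloader for the file. This is only a hedged sanity check, not part of the original code:

// Hedged sketch: verify whether hive-site.xml is visible on the driver classpath
val hiveSiteUrl = getClass.getClassLoader.getResource("hive-site.xml")
if (hiveSiteUrl == null)
  println("hive-site.xml is NOT on the classpath - Spark will fall back to a local Derby metastore")
else
  println(s"hive-site.xml found at $hiveSiteUrl")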
17/07/25 11:38:35 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
This is a hint that you are not connected to the remote Hive metastore (the one you configured to use MySQL) and that the XML file is not correctly on your classpath.
You can also set the property programmatically, without the XML file, before you create the SparkSession:
System.setProperty("hive.metastore.uris", "thrift://METASTORE:9083");
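For example, a minimal Scala sketch of the same idea (the thrift://METASTORE:9083 placeholder comes from the line above; the app name and local master are assumptions):

import org.apache.spark.sql.SparkSession

// Set the metastore URI before the session (and its Hive client) is created
System.setProperty("hive.metastore.uris", "thrift://METASTORE:9083")

val spark = SparkSession.builder()
  .appName("JsonHive")        // assumed name, matching the warehouse path in the log
  .master("local[*]")         // assumed: running locally from the IDE
  .enableHiveSupport()
  .getOrCreate()

spark.sql("show databases").show()   // should now list the databases from the remote metastore

Setting .config("hive.metastore.uris", "thrift://METASTORE:9083") on the builder achieves the same thing as the System.setProperty call.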
How to connect to a Hive metastore programmatically in SparkSQL?

Submitting a job to Apache Spark Error

I have the following settings for my Apache Spark instance that runs locally on my machine:
export SPARK_HOME=/Users/joe/Softwares/apache/spark/spark-1.6.0-bin-hadoop2.6
export SPARK_MASTER_IP=127.0.0.1
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_LOCAL_DIRS=$SPARK_HOME/work
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1G
export SPARK_EXECUTOR_INSTANCES=2
export SPARK_DAEMON_MEMORY=384m
I have a spark streaming consumer that I would like to submit to Spark. This streaming consumer is just a jar file that I submit like this:
$SPARK_HOME/bin/spark-submit --class com.my.job.MetricsConsumer --master spark://127.0.0.1:7077 /Users/joe/Sandbox/jaguar/spark-kafka-consumer/target/scala-2.11/spark-kafka-consumer-0.1.0-SNAPAHOT.jar
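For context, a minimal sketch of what the entry point named by --class might look like; only the class name com.my.job.MetricsConsumer comes from the command above, the body is an assumption, and a dummy queueStream stands in for the real Kafka consumer. One thing worth noting: a master URL hard-coded with setMaster() in the code takes precedence over the --master flag passed to spark-submit.

package com.my.job

import scala.collection.mutable
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MetricsConsumer {
  def main(args: Array[String]): Unit = {
    // Leave the master unset here so spark-submit's --master is honoured;
    // a setMaster("spark://myhost:7077") call here would override the flag.
    val conf = new SparkConf().setAppName("MetricsConsumer")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Stand-in input stream so the sketch runs; the real job would build a Kafka stream here.
    val lines = ssc.queueStream(mutable.Queue(ssc.sparkContext.makeRDD(Seq("metric-1", "metric-2"))))
    lines.print()

    ssc.start()
    ssc.awaitTermination()
  }
}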
I get the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/01/13 10:30:06 INFO SparkContext: Running Spark version 1.6.0
16/01/13 10:30:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/13 10:30:06 INFO SecurityManager: Changing view acls to: joe
16/01/13 10:30:06 INFO SecurityManager: Changing modify acls to: joe
16/01/13 10:30:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(joe); users with modify permissions: Set(joe)
16/01/13 10:30:07 INFO Utils: Successfully started service 'sparkDriver' on port 65528.
16/01/13 10:30:07 INFO Slf4jLogger: Slf4jLogger started
16/01/13 10:30:08 INFO Remoting: Starting remoting
16/01/13 10:30:08 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#172.22.0.104:65529]
16/01/13 10:30:08 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 65529.
16/01/13 10:30:08 INFO SparkEnv: Registering MapOutputTracker
16/01/13 10:30:08 INFO SparkEnv: Registering BlockManagerMaster
16/01/13 10:30:08 INFO DiskBlockManager: Created local directory at /Users/joe/Softwares/apache/spark/spark-1.6.0-bin-hadoop2.6/work/blockmgr-cee3388d-ecfc-42a7-a76c-8738401db0c9
16/01/13 10:30:08 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/01/13 10:30:08 INFO SparkEnv: Registering OutputCommitCoordinator
16/01/13 10:30:08 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/01/13 10:30:08 INFO SparkUI: Started SparkUI at http://172.22.0.104:4040
16/01/13 10:30:08 INFO HttpFileServer: HTTP File server directory is /Users/joe/Softwares/apache/spark/spark-1.6.0-bin-hadoop2.6/work/spark-10d7d880-7d1d-4234-88d4-d80558c8051a/httpd-40f80936-7508-4b6c-bb90-411aa37d7e93
16/01/13 10:30:08 INFO HttpServer: Starting HTTP Server
16/01/13 10:30:08 INFO Utils: Successfully started service 'HTTP file server' on port 65530.
16/01/13 10:30:09 INFO SparkContext: Added JAR file:/Users/joe/Sandbox/jaguar/spark-kafka-consumer/target/scala-2.11/spark-kafka-consumer-0.1.0-SNAPAHOT.jar at http://172.22.0.104:65530/jars/spark-kafka-consumer-0.1.0-SNAPAHOT.jar with timestamp 1452677409966
16/01/13 10:30:10 INFO AppClient$ClientEndpoint: Connecting to master spark://myhost:7077...
16/01/13 10:30:10 WARN AppClient$ClientEndpoint: Failed to connect to master myhost:7077
java.io.IOException: Failed to connect to myhost:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:200)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:183)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:209)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:207)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1097)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:438)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:908)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:203)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
I have checked my firewall settings and everything seems to be OK. Why would I get this error? Any ideas?

Resources