I'm trying to build a very simple Scala standalone app using MLlib, but I get the following error when trying to build the program:
Object Mllib is not a member of package org.apache.spark
Please look at the package name; note that it is lowercase:
import org.apache.spark.mllib._
And follow the guide here.
https://spark.apache.org/docs/1.1.0/mllib-guide.html
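As a concrete sketch (not from the original answer), a minimal build.sbt for such a standalone app might look like the following, assuming sbt and the Spark 1.1.0 series from the linked guide; the project name and versions are placeholders to adjust for your setup:
name := "mllib-example"                  // placeholder project name
scalaVersion := "2.10.4"                 // Spark 1.1.0 was built against Scala 2.10
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "1.1.0",
  "org.apache.spark" %% "spark-mllib" % "1.1.0"  // this artifact provides org.apache.spark.mllib
)
With spark-mllib on the classpath, the lowercase import org.apache.spark.mllib._ should resolve.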
Related
How to use external jar library functions and methods in a Spark application (Jupyter notebook)
For example, I was trying to use ua-parser as an external library to parse user agent strings,
so in my Jupyter notebook I added that jar file to the Spark config using:
spark = SparkSession.builder.appName(sparkAppName).master(sparkMaster) \
    .config("spark.jars", "my_hdfs_jar_path/ua_parser1.3.0.jar") \
    .getOrCreate()
How can someone use a class/method from such an external jar in PySpark/Spark SQL code?
In a Python or Java app, one can easily use methods and classes from an external jar by just adding it to the application classpath and importing the class names:
from ua_parser import user_agent_parser
In Java we can add the jar as an external jar and use its methods/functions just by doing:
import ua_parser.Parser
Note: I have used ua-parser just for example purposes.
I have created a Spark application to integrate with Kafka and get a stream of data from it.
But when I try to import org.apache.spark.streaming.kafka._ I get the error Cannot resolve symbol kafka, so what should I do to import this library?
Depending on your Spark and Scala versions, you need to include the Spark-Kafka integration library in your dependencies.
Spark Structured Streaming
If you plan to use Spark Structured Streaming, you need to add the following to your dependencies, as described here:
For Scala/Java applications using SBT/Maven project definitions, link your application with the following artifact:
groupId = org.apache.spark
artifactId = spark-sql-kafka-0-10_2.12
version = 3.0.1
Please note that to use the headers functionality, your Kafka client version should be version 0.11.0.0 or up. For Python applications, you need to add this above library and its dependencies when deploying your application. See the Deploying subsection below. For experimenting on spark-shell, you need to add this above library and its dependencies too when invoking spark-shell. Also, see the Deploying subsection below.
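As an illustration, here is a minimal Scala sketch of reading from Kafka with Structured Streaming once that artifact is on the classpath; the broker address and topic name are placeholders:
// sbt form of the coordinates above:
// libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "3.0.1"
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kafka-structured-streaming").getOrCreate()

val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")  // placeholder broker
  .option("subscribe", "my-topic")                       // placeholder topic
  .load()

// Kafka records arrive as binary key/value columns; cast them to strings to inspect them.
val messages = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")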
Spark Streaming
If you plan to work with Spark Streaming (Direct API), you can follow the guidance given here:
For Scala/Java applications using SBT/Maven project definitions, link your streaming application with the following artifact (see Linking section in the main programming guide for further information).
groupId = org.apache.spark
artifactId = spark-streaming-kafka-0-10_2.12
version = 3.0.1
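And a hedged sketch of the Direct API with that artifact, roughly following the integration guide; the broker, consumer group, and topic are placeholders, and ssc is assumed to be an existing StreamingContext:
// sbt form of the coordinates above:
// libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-10" % "3.0.1"
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._

val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "localhost:9092",             // placeholder broker
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "example-group",              // placeholder consumer group
  "auto.offset.reset"  -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

// ssc: an existing org.apache.spark.streaming.StreamingContext
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("my-topic"), kafkaParams)
)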
Using an AWS Glue developer endpoint, Spark version 2.4, Python version 3.
Code:
df=spark.read.format("avro").load("s3://dataexport/users/prod-users.avro")
Getting the following error message while trying to read avro file:
Failed to find data source: avro. Avro is built-in but external data source module since Spark 2.4. Please deploy the application as per the deployment section of "Apache Avro Data Source Guide".;
I found the following links, but they were not helpful in resolving my issue:
https://spark.apache.org/docs/latest/sql-data-sources-avro.html (Apache Avro Data Source Guide)
Apache Avro as a Built-in Data Source in Apache Spark 2.4
You just need to include the spark-avro package, for example:
org.apache.spark:spark-avro_2.11:2.4.0
Check which version you need here
Have you loaded the package when starting the shell? If not, you need to start a shell as shown below. The package below is applicable for Spark 2.4+ versions.
pyspark --packages com.databricks:spark-avro_2.11:4.0.0
Also, write the format as below inside read.format:
df=spark.read.format("com.databricks.spark.avro").load("s3://dataexport/users/prod-users.avro")
Note: with the Databricks package you need to write 'com.databricks.spark.avro' instead of 'avro'.
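For comparison, with the built-in module mentioned in the first answer (org.apache.spark:spark-avro), the short format name does work; here is a sketch of a Scala session, assuming the package was added when the shell was launched:
// e.g. started with: spark-shell --packages org.apache.spark:spark-avro_2.11:2.4.0
// With the built-in module on the classpath, the short name "avro" resolves:
val df = spark.read.format("avro").load("s3://dataexport/users/prod-users.avro")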
Pretty simple objective: load my custom/local jars from S3 into a Zeppelin notebook (using Zeppelin on AWS EMR).
Location of the Jar
s3://my-config-bucket/process_dataloader.jar
Following the Zeppelin documentation, I opened the interpreter settings and added spark.jars as the property name, with s3://my-config-bucket/process_dataloader.jar as its value.
I restarted the interpreter, and then in the notebook I tried to import a class from the jar using the following:
import com.org.dataloader.DataLoader
but it throws the following
<console>:23: error: object org is not a member of package com
import com.org.dataloader.DataLoader
Any suggestions for solving this problem?
A bit late though, but for anyone else who might need this in the future, try the option below.
https://bucket/dev/jars/RedshiftJDBC41-1.2.12.1017.jar is basically your S3 object URL.
%spark.dep
z.reset()
z.load("https://bucket/dev/jars/RedshiftJDBC41-1.2.12.1017.jar")
I added some methods to the Breeze library, and I can see those methods through the IDE. I was trying to add this Breeze library, which I built myself, to my project, which is based on Apache Spark. However, when I package my project with "sbt assembly" and run it on my cluster, it throws a "no such method xxx" error, which means the cluster didn't actually run my Breeze library. Could anyone tell me how to make the cluster use the Breeze library that I built myself?
My guess is that Spark uses some version of the Breeze libraries itself and prefers them over your custom jars in the assembly. You can try to build Spark with your custom library: install your library in your local Maven repository, specify it in Apache Spark's pom.xml, and build your own Spark version.
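As a rough sketch of the first step (this assumes the custom Breeze fork is also built with sbt; the version string is hypothetical), one might give the fork a distinct version, publish it to the local Maven repository, and then point Spark's pom.xml at that version before rebuilding:
// in the customized Breeze fork's build.sbt
version := "0.11.2-custom"        // hypothetical custom version string
publishMavenStyle := true         // publish Maven-style metadata

// then, from the Breeze checkout:
//   sbt publishM2                // installs the artifacts into ~/.m2/repository
// and reference version 0.11.2-custom for the breeze dependency in Spark's pom.xml.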