AWS EMR Spark not able to save table in Hive

A Spark application is trying to save data to a Hive table (backed by S3) but it fails to access a "/tmp/hive" directory (in S3). I've set the config below and called .enableHiveSupport() on the SparkSession object. All of the EMR instances have read/write access to the bucket below.
spark.sql.warehouse.dir
s3://smt-sessionizer-stage-out/spark/hive/metastore
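For context, a minimal sketch of how such a session might be built. The warehouse path is the bucket from above; the app name and the scratch-dir override are illustrative assumptions (not the poster's actual configuration), since hive.exec.scratchdir defaults to /tmp/hive and pointing it at a prefix the EMR instance role can write to is one knob worth testing:
from pyspark.sql import SparkSession

# Sketch only, not a confirmed fix. The scratch prefix and app name are placeholders.
spark = (SparkSession.builder
         .appName("sessionizer")
         .config("spark.sql.warehouse.dir",
                 "s3://smt-sessionizer-stage-out/spark/hive/metastore")
         .config("spark.hadoop.hive.exec.scratchdir",
                 "s3://smt-sessionizer-stage-out/spark/hive/scratch")  # assumed writable prefix
         .enableHiveSupport()
         .getOrCreate())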
Error:
23/01/05 17:44:52 INFO HiveUtils: Initializing HiveMetastoreConnection version 2.3.7-amzn-4 using Spark classes.
23/01/05 17:44:52 INFO HiveConf: Found configuration file file:/etc/spark/conf.dist/hive-site.xml
23/01/05 17:44:52 WARN HiveConf: HiveConf of name hive.server2.thrift.url does not exist
org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.nio.file.AccessDeniedException: tmp/hive/: PUT 0-byte object on tmp/hive/: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 1BPCGWXXYGQ7Y6SE; S3 Extended Request ID: F12HhZZOLPEkok/MNvhXXLSJesYMMvmHCQbN+6XUetg2DzM8Iu1Oe0MjkjM160Mxg54FtEFLKS4=; Proxy: null), S3 Extended Request ID: F12HhZZOLPEkok/MNvhXXLSJesYMMvmHCQbN+6XUetg2DzM8Iu1Oe0MjkjM160Mxg54FtEFLKS4=:AccessDenied
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:135)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:249)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:135)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:125)

Related

Local pyspark can't access s3 files via AWS credential profile or environment variables?

I'm new to pyspark. I've installed pyspark and the related packages shown below locally, to set up a local dev/test environment for ETL on big data stored in AWS S3 buckets.
spark 2.4.5
Scala version 2.11.12
Java HotSpot(TM) 64-Bit Server VM, 1.8.0_231
hadoop-*.jars 2.7.3 (default hadoop jars that come with spark 2.4.5 package)
2 dependent jars downloaded to $SPARK_HOME/jars/: aws-java-sdk-1.7.4.jar, hadoop-aws-2.7.3.jar
For accessing AWS from the local SDK: the ~/.aws/credentials file stores multiple profiles; the [default] profile stores aws_access_key_id, aws_secret_access_key, and aws_session_token for a federated user.
Ideally I would like to use the Spark conf DefaultAWSCredentialsProviderChain to retrieve credentials from the profile, but it doesn't seem to work (com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain). I then manually extracted the credentials from that file and fed them to the Spark conf via TemporaryAWSCredentialsProvider (as shown in the code below), or set them as environment variables in spark-env.sh, but got Status Code: 403 Forbidden.
Here are some details of the code and errors:
The following script tries to read a CSV from an AWS S3 bucket via the s3a scheme:
import pyspark
from pyspark.sql import SQLContext

# Credential values are redacted placeholders.
config = pyspark.SparkConf().setAll([
    ("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem"),
    ("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.TemporaryAWSCredentialsProvider"),
    ("fs.s3a.access.key", "xxxx"),
    ("fs.s3a.secret.key", "xxxx"),
    ("fs.s3a.session.token", "xxxx"),
])
sc = pyspark.SparkContext(conf=config)
sc.setSystemProperty("com.amazonaws.services.s3.enableV4", "true")
sql = SQLContext(sc)
df = (sql.read
      .format("csv")
      .option("header", "true")
      .load("s3a://bucket/prefix/file_name"))
It gives the errors below, but the key, secret, and token are valid, as verified with boto3:
py4j.protocol.Py4JJavaError: An error occurred while calling o130.load.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: xxxx, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: xxxx
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:892)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:557)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:355)
at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:835)
I've also tried the following settings:
("fs.s3a.aws.credentials.provider",
"com.amazonaws.auth.DefaultAWSCredentialsProviderChainAWSCredentialsProvider")
it gives errors:
com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
I also set the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN in spark-env.sh (they are valid for accessing S3, as tested with boto3); it gives the same errors as TemporaryAWSCredentialsProvider:
py4j.protocol.Py4JJavaError: An error occurred while calling o130.load.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: xxxx, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: xxxx
I also tried other versions of aws-java-sdk.jar and hadoop-aws.jar:
the ones that ship with the latest Hadoop installation (3.2.1):
aws-java-sdk-bundle-1.11.375.jar, hadoop-aws-3.2.1.jar, hadoop-common-3.2.1.jar
or the jars from Hadoop 2.8.5.
Both give class-loading errors such as:
py4j.protocol.Py4JJavaError: An error occurred while calling o86.load.
: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities
I'm really struggling here; could you please suggest what might be wrong and what a possible solution could be?
Much appreciated in advance.
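One thing worth checking, offered as an assumption rather than a confirmed answer: fs.s3a.* keys set directly on SparkConf generally do not reach the Hadoop configuration unless prefixed with spark.hadoop., and S3A session-token support (fs.s3a.session.token together with org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider) was only added in hadoop-aws 2.8, so the 2.7.3 jars may not honor it. A minimal sketch under those assumptions:
import pyspark
from pyspark.sql import SQLContext

# Sketch only: assumes hadoop-aws 2.8+ on the classpath; credentials are placeholders.
config = pyspark.SparkConf().setAll([
    ("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem"),
    ("spark.hadoop.fs.s3a.aws.credentials.provider",
     "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider"),
    ("spark.hadoop.fs.s3a.access.key", "xxxx"),
    ("spark.hadoop.fs.s3a.secret.key", "xxxx"),
    ("spark.hadoop.fs.s3a.session.token", "xxxx"),
])
sc = pyspark.SparkContext(conf=config)
sql = SQLContext(sc)
df = (sql.read
      .format("csv")
      .option("header", "true")
      .load("s3a://bucket/prefix/file_name"))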

Databricks 6.1 no database named global_temp error when initializing metastore connection

When initializing the Hive metastore connection (saving a DataFrame as a table for the first time) on a Databricks 6.1 cluster (includes Apache Spark 2.4.4, Scala 2.11) on Azure, I can see the health check for the global_temp database failing with the error:
20/02/18 12:11:17 INFO HiveUtils: Initializing HiveMetastoreConnection version 0.13.0 using file:
...
20/02/18 12:11:21 INFO HiveMetaStore: 0: get_database: global_temp
20/02/18 12:11:21 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: global_temp
20/02/18 12:11:21 ERROR RetryingHMSHandler: NoSuchObjectException(message:There is no database named global_temp)
at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:487)
at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:498)
...
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:430)
...
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)
This doesn't cause the Python script to fail, but it pollutes the logs.
Shouldn't the global_temp database be created automatically?
Can the check be switched off, or the error suppressed?
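If suppressing the log noise is acceptable, one option (a minimal sketch, assuming the log4j 1.x logging that ships with Spark 2.4 and an existing SparkSession named spark; not a confirmed Databricks-sanctioned setting) is to raise the level of the logger that emits the stack trace:
# Sketch: silence the metastore health-check noise via the JVM's log4j 1.x API.
log4j = spark._jvm.org.apache.log4j
log4j.LogManager.getLogger(
    "org.apache.hadoop.hive.metastore.RetryingHMSHandler"
).setLevel(log4j.Level.FATAL)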

Issue with AWS Glue Data Catalog as Metastore for Spark SQL on EMR

I have an AWS EMR cluster (v5.11.1) with Spark (v2.2.1) and am trying to use the AWS Glue Data Catalog as its metastore. I followed the steps in the official AWS documentation (reference link below), but I am seeing a discrepancy in accessing the Glue Catalog databases/tables. Both the EMR cluster and AWS Glue are in the same account, and the appropriate IAM permissions have been provided.
AWS Documentation : https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-glue.html
Observations:
- Using spark-shell (From EMR Master Node):
Works. I am able to access Glue DB/tables using the commands below:
spark.catalog.setCurrentDatabase("test_db")
spark.catalog.listTables
- Using spark-submit (From EMR Step):
Does not work. I keep getting the error "Database 'test_db' does not exist".
The error trace is below:
INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is hdfs:///user/spark/warehouse
INFO HiveMetaStore: 0: get_database: default
INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: default
INFO HiveMetaStore: 0: get_database: global_temp
INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: global_temp
WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
INFO SessionState: Created local directory: /mnt3/yarn/usercache/hadoop/appcache/application_1547055968446_0005/container_1547055968446_0005_01_000001/tmp/6d0f6b2c-cccd-4e90-a524-93dcc5301e20_resources
INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/6d0f6b2c-cccd-4e90-a524-93dcc5301e20
INFO SessionState: Created local directory: /mnt3/yarn/usercache/hadoop/appcache/application_1547055968446_0005/container_1547055968446_0005_01_000001/tmp/yarn/6d0f6b2c-cccd-4e90-a524-93dcc5301e20
INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/6d0f6b2c-cccd-4e90-a524-93dcc5301e20/_tmp_space.db
INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is hdfs:///user/spark/warehouse
INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
INFO CodeGenerator: Code generated in 191.063411 ms
INFO CodeGenerator: Code generated in 10.27313 ms
INFO HiveMetaStore: 0: get_database: test_db
INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: test_db
WARN ObjectStore: Failed to get database test_db, returning NoSuchObjectException
org.apache.spark.sql.AnalysisException: Database 'test_db' does not exist.;
at org.apache.spark.sql.internal.CatalogImpl.requireDatabaseExists(CatalogImpl.scala:44)
at org.apache.spark.sql.internal.CatalogImpl.setCurrentDatabase(CatalogImpl.scala:64)
at org.griffin_test.GriffinTest.ingestGriffinRecords(GriffinTest.java:97)
at org.griffin_test.GriffinTest.main(GriffinTest.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:635)
After a lot of research and going through many suggestions in blogs, I have tried the fixes below, but to no avail; we are still facing the discrepancy.
Reference Blogs:
https://forums.aws.amazon.com/thread.jspa?threadID=263860
Spark Catalog w/ AWS Glue: database not found
https://okera.zendesk.com/hc/en-us/articles/360005768434-How-can-we-configure-Spark-to-use-the-Hive-Metastore-for-metadata-
Fixes Tried:
- Enabling Hive support in spark-defaults.conf & SparkSession (Code):
Hive classes are on the CLASSPATH, and I have set the spark.sql.catalogImplementation internal configuration property to hive:
spark.sql.catalogImplementation hive
- Adding Hive metastore config:
.config("hive.metastore.connect.retries", 15)
.config("hive.metastore.client.factory.class", "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory")
Code Snippet:
SparkSession spark = SparkSession.builder().appName("Test_Glue_Catalog")
.config("spark.sql.catalogImplementation", "hive")
.config("hive.metastore.connect.retries", 15)
.config("hive.metastore.client.factory.class","com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory")
.enableHiveSupport()
.getOrCreate();
Any suggestions for figuring out the root cause of this discrepancy would be really helpful.
Appreciate your help! Thank you!
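For comparison, here is a minimal PySpark sketch of the same configuration with the Glue client factory passed under the spark.hadoop. prefix, so the property lands in the Hadoop/Hive configuration that the metastore client reads. Whether this resolves the spark-submit discrepancy on EMR 5.11 is an assumption to verify, not a confirmed fix:
from pyspark.sql import SparkSession

# Sketch only: the spark.hadoop. prefix forwards the property to the Hive client.
spark = (SparkSession.builder
         .appName("Test_Glue_Catalog")
         .config("spark.sql.catalogImplementation", "hive")
         .config("spark.hadoop.hive.metastore.client.factory.class",
                 "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory")
         .enableHiveSupport()
         .getOrCreate())

spark.catalog.setCurrentDatabase("test_db")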

new Spark StreamingContext fails with HDFS errors

I'm using DC/OS installed via Azure ACS, and I installed HDFS and Spark via the dcos tool with default options.
Creating a Spark StreamingContext gives:
16/07/22 01:51:04 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
16/07/22 01:51:04 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
Exception in thread "main" java.lang.IllegalArgumentException:
java.net.UnknownHostException: namenode1.hdfs.mesos
I expect I have to redeploy the Spark package with dcos package install --options=, but I can't figure out what hdfs.config-url should be. The docs at https://docs.mesosphere.com/1.7/usage/service-guides/spark/install/#hdfs seem out of date.
Yes, it is out of date. We'll fix that.
DC/OS HDFS now serves its config on http://hdfs.marathon.mesos:[port]/v1/connect

trouble adding the spark-csv package in the Cloudera VM

I am using the Cloudera Quickstart VM to test out some pyspark work. For one task, I need to add the spark-csv package. Here is what I did:
PYSPARK_DRIVER_PYTHON=ipython pyspark -- packages com.databricks:spark-csv_2.10:1.3.0
pyspark started up fine; however, I did get warnings such as:
16/02/09 17:41:22 WARN util.Utils: Your hostname, quickstart.cloudera resolves to a loopback address: 127.0.0.1; using 10.0.2.15 instead (on interface eth0)
16/02/09 17:41:22 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/02/09 17:41:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Then I ran my code in pyspark:
yelp_df = sqlCtx.load(
    source="com.databricks.spark.csv",
    header="true",
    inferSchema="true",
    path="file:///directory/file.csv")
But I am getting an error message:
Py4JJavaError: An error occurred while calling o19.load.
: java.lang.RuntimeException: Failed to load class for data source: com.databricks.spark.csv
at scala.sys.package$.error(package.scala:27)
What could have gone wrong? Thanks in advance for your help.
Try this:
PYSPARK_DRIVER_PYTHON=ipython pyspark --packages com.databricks:spark-csv_2.10:1.3.0
Note there is no space: the original command's "-- packages" is a typo and should be "--packages".
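Once the package resolves, an equivalent read via the DataFrameReader API (a sketch assuming Spark 1.4+; the path is the same placeholder as in the question, and sqlCtx is the SQLContext provided by the pyspark shell):
# Sketch: same read expressed with the DataFrameReader API.
yelp_df = (sqlCtx.read
           .format("com.databricks.spark.csv")
           .option("header", "true")
           .option("inferSchema", "true")
           .load("file:///directory/file.csv"))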
