Spark spark.sql.session.timeZone doesn't work with JSON source - apache-spark

Does Spark v2.3.1 depend on the local timezone when reading from a JSON file?
My src/test/resources/data/tmp.json:
[
  {
    "timestamp": "1970-01-01 00:00:00.000"
  }
]
and Spark code:
SparkSession.builder()
    .appName("test")
    .master("local")
    .config("spark.sql.session.timeZone", "UTC")
    .getOrCreate()
    .read()
    .option("multiLine", true).option("mode", "PERMISSIVE")
    .schema(new StructType()
        .add(new StructField("timestamp", DataTypes.TimestampType, true, Metadata.empty())))
    .json("src/test/resources/data/tmp.json")
    .show();
Result:
+-------------------+
| timestamp|
+-------------------+
|1969-12-31 22:00:00|
+-------------------+
How can I make Spark return 1970-01-01 00:00:00.000?
P.S. This question is not a duplicate of Spark Structured Streaming automatically converts timestamp to local time, because the solution provided there does not work for me and is already included (see .config("spark.sql.session.timeZone", "UTC")) in my question.
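For what it's worth, here is a hedged sketch (written in PySpark rather than the Java API above) of the two places a time zone can be set: the session-level spark.sql.session.timeZone and the JSON reader's own timeZone option. I cannot confirm from this post alone that the reader-level option changes the 2.3.1 behaviour, so treat it as something to check rather than a definitive fix.
# Sketch only: PySpark equivalent of the Java snippet above, additionally passing
# the JSON reader's own timeZone option as an extra knob to try.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, TimestampType

spark = (
    SparkSession.builder
    .appName("test")
    .master("local")
    .config("spark.sql.session.timeZone", "UTC")
    .getOrCreate()
)

schema = StructType([StructField("timestamp", TimestampType(), True)])

(spark.read
    .option("multiLine", True)
    .option("mode", "PERMISSIVE")
    .option("timeZone", "UTC")  # reader-level time zone for timestamp parsing
    .schema(schema)
    .json("src/test/resources/data/tmp.json")
    .show(truncate=False))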

Related

How to connect to Glue catalog from an EMR spark-submit step

We use PySpark in an EMR cluster to run queries against our Glue database. We use two methods to execute the Python scripts, namely through a Zeppelin notebook and through EMR steps. Connecting to the Glue database works great in Zeppelin, but not in EMR steps. When we run a query against the Glue database, we get the following error:
pyspark.sql.utils.AnalysisException: Database '{glue_database_name}' does not exist.
This is the spark configuration used in the executed .py file:
spark = SparkSession \
    .builder \
    .appName("dfp.sln.kunderelation.work") \
    .config("spark.sql.broadcastTimeout", "36000") \
    .config("spark.sql.legacy.parquet.int96RebaseModeInRead", "CORRECTED") \
    .config("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED") \
    .config("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "CORRECTED") \
    .config("hive.metastore.client.factory.class", "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory") \
    .enableHiveSupport() \
    .getOrCreate()
spark.conf.set("spark.sql.sources.ignoreDataLocality.enabled", "true")
The step is submitted using boto3 with the following configuration:
response = emr_client.add_job_flow_steps(
    JobFlowId=cluster_id,
    Steps=[
        {
            'Name': name,
            'ActionOnFailure': 'CONTINUE',
            'HadoopJarStep': {
                'Jar': 'command-runner.jar',
                'Args': ['spark-submit', '--deploy-mode', 'client', '--master', 'yarn', "{path to .py script}"]
            }
        },
    ]
)
Both types of EC2 users have been given Glue and S3 privileges.
What do we need to set up to connect to the Glue database?
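One thing worth checking (my own suggestion, not a confirmed fix): an EMR step does not necessarily inherit the Glue catalog settings that the Zeppelin interpreter picks up from the cluster configuration, so the metastore factory can also be passed explicitly to spark-submit via the spark.hadoop.* prefix in the step arguments, roughly like this:
# Hedged sketch: pass the Glue Data Catalog metastore factory on the spark-submit
# command line of the EMR step, in addition to (or instead of) setting it in the script.
response = emr_client.add_job_flow_steps(
    JobFlowId=cluster_id,
    Steps=[
        {
            'Name': name,
            'ActionOnFailure': 'CONTINUE',
            'HadoopJarStep': {
                'Jar': 'command-runner.jar',
                'Args': [
                    'spark-submit', '--deploy-mode', 'client', '--master', 'yarn',
                    '--conf',
                    'spark.hadoop.hive.metastore.client.factory.class='
                    'com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory',
                    "{path to .py script}"
                ]
            }
        },
    ]
)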

PySpark write data to Ceph returns 400 Bad Request

I have a problem with my PySpark configuration when writing data to a Ceph bucket.
With the following Python code snippet I can read data from the Ceph bucket, but when I try to write to the bucket I get the following error:
22/07/22 10:00:58 DEBUG S3ErrorResponseHandler: Failed in parsing the error response :
org.apache.hadoop.shaded.com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog
at [row,col {unknown-source}]: [1,0]
at org.apache.hadoop.shaded.com.ctc.wstx.sr.StreamScanner.throwUnexpectedEOF(StreamScanner.java:701)
at org.apache.hadoop.shaded.com.ctc.wstx.sr.BasicStreamReader.handleEOF(BasicStreamReader.java:2217)
at org.apache.hadoop.shaded.com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:2123)
at org.apache.hadoop.shaded.com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1179)
at com.amazonaws.services.s3.internal.S3ErrorResponseHandler.createException(S3ErrorResponseHandler.java:122)
at com.amazonaws.services.s3.internal.S3ErrorResponseHandler.handle(S3ErrorResponseHandler.java:71)
at com.amazonaws.services.s3.internal.S3ErrorResponseHandler.handle(S3ErrorResponseHandler.java:52)
[...]
22/07/22 10:00:58 DEBUG request: Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: null; S3 Extended Request ID: null; Proxy: null), S3 Extended Request ID: null
22/07/22 10:00:58 DEBUG AwsChunkedEncodingInputStream: AwsChunkedEncodingInputStream reset (will reset the wrapped stream because it is mark-supported).
Pyspark code (not working):
from pyspark.sql import SparkSession
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages com.amazonaws:aws-java-sdk-bundle:1.12.264,org.apache.spark:spark-sql-kafka-0-10_2.13:3.3.0,org.apache.hadoop:hadoop-aws:3.3.3 pyspark-shell"
spark = (
    SparkSession.builder.appName("app")
    .config("spark.hadoop.fs.s3a.access.key", access_key)
    .config("spark.hadoop.fs.s3a.secret.key", secret_key)
    .config("spark.hadoop.fs.s3a.connection.timeout", "10000")
    .config("spark.hadoop.fs.s3a.endpoint", "http://HOST_NAME:88")
    .config("spark.hadoop.fs.s3a.connection.ssl.enabled", "false")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .config("spark.hadoop.fs.s3a.endpoint.region", "default")
    .getOrCreate()
)
spark.sparkContext.setLogLevel("TRACE")
# This works
spark.read.csv("s3a://test-data/data.csv")
# This throws the provided error
df_to_write = spark.createDataFrame([{"a": "x", "b": "y", "c": "3"}])
df_to_write.write.csv("s3a://test-data/with_love.csv")
Also, for the same Ceph bucket, I am able to read and write data via boto3:
import boto3
from botocore.exceptions import ClientError
from botocore.client import Config
config = Config(connect_timeout=20, retries={'max_attempts': 0})
s3_client = boto3.client(
    's3',
    config=config,
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    region_name="defaut",
    endpoint_url='http://HOST_NAME:88',
    verify=False
)
response = s3_client.list_buckets()
# Read
print('Existing buckets:')
for bucket in response['Buckets']:
    print(f' {bucket["Name"]}')
# Write
dummy_data = b'Dummy string'
s3_client.put_object(Body=dummy_data, Bucket='test-spark', Key='awesome_key')
s3cmd with the same configuration also works fine.
I think I'm missing some PySpark (hadoop-aws) configuration; could anyone help me identify the configuration problem? Thanks.
After some research on the web, I was able to solve the problem using this hadoop-aws configuration:
fs.s3a.signing-algorithm: S3SignerType
I configured this property in pySpark with:
spark = (
    SparkSession.builder.appName("app")
    .config("spark.hadoop.fs.s3a.access.key", access_key)
    .config("spark.hadoop.fs.s3a.secret.key", secret_key)
    .config("spark.hadoop.fs.s3a.connection.timeout", "10000")
    .config("spark.hadoop.fs.s3a.endpoint", "http://HOST_NAME:88")
    .config("spark.hadoop.fs.s3a.connection.ssl.enabled", "false")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .config("spark.hadoop.fs.s3a.endpoint.region", "default")
    .config("spark.hadoop.fs.s3a.signing-algorithm", "S3SignerType")
    .getOrCreate()
)
From what I understand, the version of Ceph I am using (16.2.3) does not support the default signing algorithm used by Spark 3.3.0 on Hadoop 3.3.2.
For further details see this documentation.
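As a side note of my own (not part of the original answer): if a SparkSession is already running, the same S3A property can in principle also be set on its Hadoop configuration, although it may not take effect for filesystem instances that have already been created:
# My own addition, a sketch only: set the signing algorithm on an existing session's
# Hadoop configuration (uses PySpark's internal _jsc handle to the JavaSparkContext).
spark.sparkContext._jsc.hadoopConfiguration().set(
    "fs.s3a.signing-algorithm", "S3SignerType"
)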

EMR Hudi cannot create hive connection jdbc:hive2://localhost:10000/

I am trying to save a Hudi table from a Jupyter notebook with hive-sync enabled. I am using EMR 5.28.0 with AWS Glue enabled as the catalog:
# Create a DataFrame
inputDF = spark.createDataFrame(
    [
        ("100", "2015-01-01", "2015-01-01T13:51:39.340396Z"),
        ("101", "2015-01-01", "2015-01-01T12:14:58.597216Z"),
        ("102", "2015-01-01", "2015-01-01T13:51:40.417052Z"),
        ("103", "2015-01-01", "2015-01-01T13:51:40.519832Z"),
        ("104", "2015-01-02", "2015-01-01T12:15:00.512679Z"),
        ("105", "2015-01-02", "2015-01-01T13:51:42.248818Z"),
    ],
    ["id", "creation_date", "last_update_time"]
)
# Specify common DataSourceWriteOptions in the single hudiOptions variable
hudiOptions = {
    'hoodie.table.name': 'my_hudi_table',
    'hoodie.datasource.write.recordkey.field': 'id',
    'hoodie.datasource.write.partitionpath.field': 'creation_date',
    'hoodie.datasource.write.precombine.field': 'last_update_time',
    'hoodie.datasource.hive_sync.enable': 'true',
    'hoodie.datasource.hive_sync.table': 'my_hudi_table',
    'hoodie.datasource.hive_sync.partition_fields': 'creation_date',
    'hoodie.datasource.hive_sync.partition_extractor_class': 'org.apache.hudi.hive.MultiPartKeysValueExtractor'
}
# Write a DataFrame as a Hudi dataset
(inputDF.write
    .format('org.apache.hudi')
    .option('hoodie.datasource.write.operation', 'insert')
    .options(**hudiOptions)
    .mode('overwrite')
    .save('s3://dytyniak-test-data/myhudidataset/'))
I receive the following error:
An error occurred while calling o309.save.
: org.apache.hudi.hive.HoodieHiveSyncException: Cannot create hive connection jdbc:hive2://localhost:10000/
I assume you are following the tutorial from the AWS documentation. I got it to work with Hudi 0.9.0 by setting hive_sync.mode to hms in hudiOptions (see the Hudi docs):
hudiOptions = {
    'hoodie.table.name': 'my_hudi_table',
    'hoodie.datasource.write.recordkey.field': 'id',
    'hoodie.datasource.write.partitionpath.field': 'creation_date',
    'hoodie.datasource.write.precombine.field': 'last_update_time',
    'hoodie.datasource.hive_sync.enable': 'true',
    'hoodie.datasource.hive_sync.table': 'my_hudi_table',
    'hoodie.datasource.hive_sync.partition_fields': 'creation_date',
    'hoodie.datasource.hive_sync.partition_extractor_class': 'org.apache.hudi.hive.MultiPartKeysValueExtractor',
    'hoodie.datasource.hive_sync.mode': 'hms'
}
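The write call itself is unchanged from the question; only the options dictionary differs:
# Same write as in the question, now picking up the hive_sync.mode option above
(inputDF.write
    .format('org.apache.hudi')
    .option('hoodie.datasource.write.operation', 'insert')
    .options(**hudiOptions)
    .mode('overwrite')
    .save('s3://dytyniak-test-data/myhudidataset/'))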

How do I use spark xml data source in .net?

Is there a way to use spark-xml (https://github.com/databricks/spark-xml) in a Spark .NET/C# job?
I was able to use the spark-xml data source from .NET.
Here is the test program:
using Microsoft.Spark.Sql;

namespace MySparkApp
{
    class Program
    {
        static void Main(string[] args)
        {
            SparkSession spark = SparkSession
                .Builder()
                .AppName("spark-xml-example")
                .GetOrCreate();

            DataFrame df = spark.Read()
                .Option("rowTag", "book")
                .Format("xml")
                .Load("books.xml");
            df.Show();

            df.Select("author", "_id")
                .Write()
                .Format("xml")
                .Option("rootTag", "books")
                .Option("rowTag", "book")
                .Save("newbooks.xml");

            spark.Stop();
        }
    }
}
Check out https://github.com/databricks/spark-xml and build an assembly jar with the 'sbt assembly' command, then copy the assembly jar to the dotnet project workspace.
Build the project: dotnet build
Submit the Spark job:
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.deploy.dotnet.DotnetRunner \
  --jars scala-2.11/spark-xml-assembly-0.10.0.jar \
  --master local bin/Debug/netcoreapp3.1/microsoft-spark-2.4.x-0.10.0.jar \
  dotnet bin/Debug/netcoreapp3.1/sparkxml.dll
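For context, the rowTag/rootTag options above assume books.xml is shaped roughly like the sample below (my own illustration, not from the original answer). spark-xml maps XML attributes such as id to columns prefixed with an underscore, which is why the program selects the _id column.
<books>
  <book id="bk101">
    <author>Gambardella, Matthew</author>
    <title>XML Developer's Guide</title>
  </book>
  <book id="bk102">
    <author>Ralls, Kim</author>
    <title>Midnight Rain</title>
  </book>
</books>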

How to read data from HBase table using pyspark?

I have created a dummy HBase table called emp with one record. Below is the data:
hbase(main):005:0> put 'emp','1','personal data:name','raju'
0 row(s) in 0.1540 seconds

hbase(main):006:0> scan 'emp'
ROW                COLUMN+CELL
 1                 column=personal data:name, timestamp=1512478562674, value=raju
1 row(s) in 0.0280 seconds
Now I have to establish a connection between HBase and PySpark using shc. Can you please help me with the code to read the above HBase table as a DataFrame in PySpark?
Version Details:
Spark Version 2.2.0, HBase 1.3.1, HCatalog 2.3.1
You can try it like this:
pyspark --master local --packages com.hortonworks:shc-core:1.1.1-1.6-s_2.10 --repositories http://repo.hortonworks.com/content/groups/public/ --files /etc/hbase/conf.cloudera.hbase/hbase-site.xml
# Catalog that maps the HBase table to DataFrame columns; shc expects valid JSON,
# so keys and values are double-quoted.
empdata = """{
    "table": {
        "namespace": "default",
        "name": "emp"
    },
    "rowkey": "key",
    "columns": {
        "emp_id": {"cf": "rowkey", "col": "key", "type": "string"},
        "emp_name": {"cf": "personal data", "col": "name", "type": "string"}
    }
}"""
df = sqlContext \
    .read \
    .options(catalog=empdata) \
    .format('org.apache.spark.sql.execution.datasources.hbase') \
    .load()
df.show()
Refer to this blog for more info: https://diogoalexandrefranco.github.io/interacting-with-hbase-from-pyspark/
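As a small usage note of my own: once loaded, the DataFrame can be queried like any other Spark DataFrame, for example:
# Illustrative only: select and filter the columns defined in the catalog above
df.select('emp_id', 'emp_name').filter(df.emp_name == 'raju').show()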
