How to write a spark dataframe to S3 with MD5 header? - apache-spark

I have a Spark DataFrame that I need to write to an S3 Object Lock enabled bucket.
A simple write results in this error
df.write.parquet(output_path)
com.amazonaws.services.s3.model.AmazonS3Exception:
Content-MD5 HTTP header is required for Put Object requests with Object Lock parameters
Any ideas on how I can solve this?
There are ways to handle this with boto3-style uploads, but how do I do it with Spark's df.write?
s3_client.put_object(
    Bucket=<S3_BUCKET_NAME>,
    Key=<KEY>,
    Body=open(<FILE_NAME>, "rb"),
    ContentMD5=<MD5_HASH>
)

Not supported in Hadoop's s3a connector, I'm afraid.
As is usual in OSS projects, your participation will help there: https://issues.apache.org/jira/browse/HADOOP-15224. That JIRA was created in 2018 and hasn't had attention as nobody knew it was actually needed. We could maybe revisit that.
Don't know about the EMR s3 connector.
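One workaround, while the connectors lack this support, is to stage the output somewhere the uploading process can read it and then upload the files yourself with boto3, computing the base64-encoded MD5 that the Content-MD5 header expects. This is only a minimal sketch: the bucket name, local directory, and prefix below are placeholders, and it assumes the Parquet files were written to a local path visible to the uploader (e.g. df.write.parquet("file:///tmp/my_output") on a single-node setup).

import base64
import hashlib
import os

import boto3

local_dir = "/tmp/my_output"        # hypothetical local staging directory
bucket = "my-object-lock-bucket"    # hypothetical Object Lock enabled bucket
prefix = "output/"

s3_client = boto3.client("s3")

for root, _, files in os.walk(local_dir):
    for name in files:
        path = os.path.join(root, name)
        with open(path, "rb") as f:
            body = f.read()
        # Content-MD5 is the base64-encoded 128-bit MD5 digest of the request body.
        md5_b64 = base64.b64encode(hashlib.md5(body).digest()).decode("utf-8")
        s3_client.put_object(
            Bucket=bucket,
            Key=prefix + os.path.relpath(path, local_dir),
            Body=body,
            ContentMD5=md5_b64,
        )

This gives up the parallel multipart upload of the s3a connector, so it is only practical for modest output sizes.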

Related

Spark event log not able to write to s3

I am trying to write the event log of my Spark application to S3, for consumption by the history server later, but I get the warning message below in the log:
WARN S3ABlockOutputStream: Application invoked the Syncable API against stream writing to /spark_logs/eventlog_v2_local-1671627766466/events_1_local-1671627766466. This is unsupported
Below is the spark config I used:
config("spark.eventLog.enabled", "true")\
.config("spark.eventLog.dir", 's3a://change-data-capture-cdc-test/pontus_data_load/spark_logs')\
.config("spark.eventLog.rolling.enabled", "true")\
.config("spark.eventLog.rolling.maxFileSize", "10m")
spark version - 3.3.1
dependent jars:
org.apache.hadoop:hadoop-aws:3.3.0
com.amazonaws:aws-java-sdk-bundle:1.11.901
Only the appstatus_local-1671627766466.inprogress file is created; the actual log file is not created. But with my local file system it's working as expected.
the warning means "the Application invoked the Syncable API against stream writing to /spark_logs/eventlog_v2_local-1671627766466/events_1_local-1671627766466. This is unsupported"
Application code persists data to a filesystem using sync() to flush and save; clearly the Spark logging is calling this. And as noted, the s3a client says "no can do".
S3 is not a filesystem; it is an object store, and objects are written in single atomic operations. If you look at the S3ABlockOutputStream class (it is all open source, after all) you can see that it may upload data as it goes, but it only completes the write in close().
Therefore, the log is not visible during the logging process itself; it will only appear once the stream is closed. The warning is there to make clear this is happening.
If you want, you can set spark.hadoop.fs.s3a.downgrade.syncable.exceptions to false and it will raise an exception instead. That really makes clear to applications like HBase that the filesystem lacks the semantics they need.
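For illustration, here is a sketch of setting that option on the session builder alongside the event-log settings from the question; it is just the standard spark.hadoop.* pass-through for Hadoop properties, nothing specific to this answer.

from pyspark.sql import SparkSession

# Sketch: fail fast on Syncable calls instead of only logging a warning.
spark = (
    SparkSession.builder
    .appName("eventlog-to-s3")
    .config("spark.eventLog.enabled", "true")
    .config("spark.eventLog.dir", "s3a://change-data-capture-cdc-test/pontus_data_load/spark_logs")
    .config("spark.hadoop.fs.s3a.downgrade.syncable.exceptions", "false")
    .getOrCreate()
)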

fs.s3 configuration with two S3 accounts with EMR

I have a pipeline using Lambda and EMR, where I read a CSV from S3 in account A and write Parquet to S3 in account B.
I created the EMR cluster in account B, and it has access to S3 in account B.
I cannot add access to account A's S3 bucket to EMR_EC2_DefaultRole (as that account is the enterprise-wide data storage), so I use an access key and secret key to access account A's S3 bucket. This is done through a Cognito token.
METHOD1
I am using the fs.s3 protocol to read the CSV from S3 in account A and write to S3 in account B.
I have PySpark code which reads from S3 (A) and writes Parquet to S3 (B). I submit 100 jobs at a time. This PySpark code runs in EMR.
Reading uses the following settings:
hadoop_config = sc._jsc.hadoopConfiguration()
hadoop_config.set("fs.s3.awsAccessKeyId", dl_access_key)
hadoop_config.set("fs.s3.awsSecretAccessKey", dl_secret_key)
hadoop_config.set("fs.s3.awsSessionToken", dl_session_key)
spark_df_csv = spark_session.read.option("Header", "True").csv("s3://somepath")
Writing:
I am using the s3a protocol: s3a://some_bucket/
It works, but sometimes I see a
_temporary folder left in the S3 bucket and not all CSVs converted to Parquet.
When I set EMR step concurrency to 256 (EMR 5.28) and submit 100 jobs, I get the _temporary rename error.
Issues:
This method creates a _temporary folder and sometimes doesn't delete it; I can see the _temporary folder in the S3 bucket.
When I enable EMR step concurrency (latest version, EMR 5.28), which allows steps to run in parallel, I get the rename _temporary error for some of the files.
METHOD2:
I feel s3a is not good for parallel jobs.
So I want to read and write using fs.s3, as it has better file committers.
So I did this: initially I set the Hadoop configuration as above for account A, and then unset the configuration so that the default credentials are used for the account B S3 bucket.
Like this:
hadoop_config = sc._jsc.hadoopConfiguration()
hadoop_config.unset("fs.s3.awsAccessKeyId")
hadoop_config.unset("fs.s3.awsSecretAccessKey")
hadoop_config.unset("fs.s3.awsSessionToken")
spark_df_csv.repartition(1).write.partitionBy(['org_id', 'institution_id']). \
mode('append').parquet(write_path)
Issues:
This works, but the issue is: say I trigger the Lambda, which in turn submits jobs for 100 files (in a loop); some 10-odd files fail with Access Denied while writing to the S3 bucket.
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n ... 1 more\nCaused by: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service:
This could be because the unset sometimes doesn't take effect, or
because the set/unset happens concurrently across parallel runs. I mean, the Spark context for one job is unsetting the Hadoop configuration while another is setting it, which may cause this issue, though I'm not sure how Spark contexts behave in parallel.
Doesn't each job have its own Spark context and session?
Please suggest alternatives for my situation.
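One alternative that may be worth evaluating, assuming the reads and writes go through the s3a connector rather than EMRFS's s3://, is s3a's per-bucket configuration: credentials are scoped to a bucket name, so there is no global set/unset to race between jobs. A minimal sketch, with hypothetical bucket names account-a-bucket and account-b-bucket (the account B bucket simply keeps the cluster's default credentials):

# Sketch: scope account A's temporary credentials to its bucket only.
hadoop_config = sc._jsc.hadoopConfiguration()
hadoop_config.set("fs.s3a.bucket.account-a-bucket.access.key", dl_access_key)
hadoop_config.set("fs.s3a.bucket.account-a-bucket.secret.key", dl_secret_key)
hadoop_config.set("fs.s3a.bucket.account-a-bucket.session.token", dl_session_key)
hadoop_config.set(
    "fs.s3a.bucket.account-a-bucket.aws.credentials.provider",
    "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider",
)

spark_df_csv = spark_session.read.option("header", "true").csv("s3a://account-a-bucket/somepath")
spark_df_csv.write.mode("append").parquet("s3a://account-b-bucket/write_path")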

Read CSV file from AWS S3

I have an EC2 instance running pyspark and I'm able to connect to it (ssh) and run interactive code within a Jupyter Notebook.
I have an S3 bucket with a CSV file that I want to read. When I attempt to read it with:
spark = SparkSession.builder.appName('Basics').getOrCreate()
df = spark.read.csv('https://s3.us-east-2.amazonaws.com/bucketname/filename.csv')
It throws a long Python error message and then something related to:
Py4JJavaError: An error occurred while calling o131.csv.
Specify the S3 path along with the access key and secret key, as follows:
's3n://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>#my.bucket/folder/input_data.csv'
Access key-related information can be introduced in the typical username + password manner for URLs. As a rule, the access protocol should be s3a, the successor to s3n (see Technically what is the difference between s3n, s3a and s3?). Putting this together, you get
spark.read.csv("s3a://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>#bucketname/filename.csv")
As an aside, some Spark execution environments, e.g., Databricks, allow S3 buckets to be mounted as part of the file system. You can do the same when you build a cluster using something like s3fs.
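If you would rather not embed credentials in the URL (they tend to leak into logs and history), a common alternative is to set the standard s3a properties on the Hadoop configuration instead. A sketch, with the key values as placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('Basics').getOrCreate()

# Sketch: pass credentials through the s3a configuration rather than the URL.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", "<AWS_ACCESS_KEY_ID>")
hadoop_conf.set("fs.s3a.secret.key", "<AWS_SECRET_ACCESS_KEY>")

df = spark.read.csv("s3a://bucketname/filename.csv", header=True)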

How to add user defined metadata to S3 object via Spark

I am using a Spark SQL DataFrame to write to S3 as Parquet:
Dataset.write
.mode(SaveMode.Overwrite)
.parquet("s3://filepath")
In the Spark configuration I have specified the following options for SSE and the ACL:
spark.sparkContext.hadoopConfiguration.set("fs.s3a.server-side-encryption-algorithm", "AES256")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.acl.default","BucketOwnerFullControl")
How do I add user-defined metadata to the S3 object?
Thanks,
Saravanan.
I don't think it's possible today. You can't add/update the user-defined metadata for S3 objects from EMR. This is from my limited knowledge. Again, AWS Support is the best source to get this answered, but I don't believe the API is exposed yet to allow users to add/update user-defined metadata from EMR.
It's a very good question. Unfortunately, Spark does not permit it, since Spark is filesystem agnostic (it doesn't distinguish between local, HDFS, or S3).
In my mind, since every file system supports some kind of metadata, Spark should offer something as well...
But as a workaround, you can always change the metadata after the files have been uploaded.
Example with Java, making the files public:
Gradle:
[group: 'org.apache.hadoop', name: 'hadoop-aws', version: '2.8.0'],
(provides: [group: 'com.amazonaws', name: 'aws-java-sdk-s3', version: '1.10.60'],
[group: 'com.amazonaws', name: 'aws-java-sdk-core', version: '1.10.60'])
// Write data to S3
df
.write()
.save("s3a://BUCKET/test_json");
// (this generate test_json/_SUCCESS + test_json/part-XXX keys)
// Create S3 client
private final AmazonS3 conn;
AWSCredentials credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY);
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTP);
this.conn = new AmazonS3Client(credentials, clientConfig);
this.conn.setEndpoint(S3_ENDPOINT);
this.conn.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
// Get Spark generated folder files
this.conn.listObjects(BUCKET, "test_json").getObjectSummaries().forEach(obj -> {
this.conn.setObjectAcl(BUCKET, obj.getKey(), CannedAccessControlList.PublicRead);
});
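In the same spirit, here is a hedged Python sketch for the original ask, user-defined metadata: S3 only lets you set object metadata at write or copy time, so each key Spark produced is copied onto itself with MetadataDirective='REPLACE'. The bucket, prefix, and metadata values are placeholders.

import boto3

# Sketch: attach user-defined metadata to the objects Spark just wrote,
# by copying each object onto itself with replaced metadata.
s3 = boto3.client("s3")
bucket = "my-bucket"     # hypothetical
prefix = "filepath/"     # the prefix Spark wrote to

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            Metadata={"owner": "saravanan", "pipeline": "daily-load"},  # hypothetical metadata
            MetadataDirective="REPLACE",
            ServerSideEncryption="AES256",  # keep SSE consistent with the Spark write
        )

Note that CopyObject only handles objects up to 5 GB; larger files would need a multipart copy.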

spark read partitioned data in S3 partly in glacier

I have a dataset in Parquet in S3, partitioned by date (dt), with the oldest dates stored in AWS Glacier to save some money. For instance, we have...
s3://my-bucket/my-dataset/dt=2017-07-01/ [in glacier]
...
s3://my-bucket/my-dataset/dt=2017-07-09/ [in glacier]
s3://my-bucket/my-dataset/dt=2017-07-10/ [not in glacier]
...
s3://my-bucket/my-dataset/dt=2017-07-24/ [not in glacier]
I want to read this dataset, but only a subset of dates that are not yet in Glacier, e.g.:
val from = "2017-07-15"
val to = "2017-08-24"
val path = "s3://my-bucket/my-dataset/"
val X = spark.read.parquet(path).where(col("dt").between(from, to))
Unfortunately, I get this exception:
java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: The operation is not valid for the object's storage class (Service: Amazon S3; Status Code: 403; Error Code: InvalidObjectState; Request ID: C444D508B6042138)
It seems that Spark does not like partitioned datasets when some partitions are in Glacier. I could always read each date specifically, add the dt column, and reduce(_ union _) at the end (sketched just below), but it is ugly as hell and it should not be necessary.
Is there any tip to read the available data in the datastore even with old data in Glacier?
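For reference, the manual workaround described above might look roughly like this, shown in PySpark for illustration, with a hypothetical list of dates known to still be in a readable storage class:

# Sketch: read only the partitions that are not in Glacier and union them,
# re-adding the dt partition column that is lost when reading paths directly.
from functools import reduce
from pyspark.sql import functions as F

available_dates = ["2017-07-15", "2017-07-16", "2017-07-17"]  # hypothetical

parts = [
    spark.read.parquet(f"s3://my-bucket/my-dataset/dt={d}/").withColumn("dt", F.lit(d))
    for d in available_dates
]
X = reduce(lambda a, b: a.union(b), parts)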
The error you are getting is not related to Apache Spark; you are getting the exception because of the Glacier service. In short, S3 objects in the Glacier storage class are not accessible in the same way as normal objects; they need to be retrieved from Glacier before they can be read.
Apache Spark cannot directly handle a TABLE/PARTITION mapped to S3 objects in Glacier storage.
java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: The operation is not valid for the object's storage class (Service: Amazon S3; Status Code: 403; Error Code: InvalidObjectState; Request ID: C444D508B6042138)
When S3 transitions objects from the storage classes
STANDARD,
STANDARD_IA, or
REDUCED_REDUNDANCY
to the GLACIER storage class, the object data is stored in Glacier, where it is not directly readable by you, and S3 bills only Glacier storage rates.
It is still an S3 object, but it has the GLACIER storage class.
When you need to access one of these objects, you initiate a restore, which creates a temporary copy in S3.
Restoring the data back into the S3 bucket and then reading it with Apache Spark will resolve your issue.
https://aws.amazon.com/s3/storage-classes/
Note: Apache Spark, AWS Athena, etc. cannot read objects directly from Glacier; if you try, you will get a 403 error.
If you archive objects using the Glacier storage option, you must
inspect the storage class of an object before you attempt to retrieve
it. The customary GET request will work as expected if the object is
stored in S3 Standard or Reduced Redundancy (RRS) storage. It will
fail (with a 403 error) if the object is archived in Glacier. In this
case, you must use the RESTORE operation (described below) to make
your data available in S3.
https://aws.amazon.com/blogs/aws/archive-s3-to-glacier/
The 403 error is due to the fact that you cannot read an object that is archived in Glacier (source).
Reading Files from Glacier
If you want to read files from Glacier, you need to restore them to S3 before using them in Apache Spark. A copy will be available in S3 for the number of days specified in the restore command (for details see here). You can use the S3 console, the CLI, or any language to do that.
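As an illustration, a restore could be initiated per object with boto3 along these lines; the bucket, prefix, retention days, and retrieval tier below are placeholders for your own values:

import boto3

# Sketch: start a Glacier restore for every object under one partition prefix.
s3 = boto3.client("s3")
bucket = "my-bucket"                     # hypothetical
prefix = "my-dataset/dt=2017-07-01/"     # a partition currently in Glacier

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        s3.restore_object(
            Bucket=bucket,
            Key=obj["Key"],
            RestoreRequest={
                "Days": 7,                                  # how long the temporary copy stays in S3
                "GlacierJobParameters": {"Tier": "Bulk"},   # cheapest, slowest retrieval tier
            },
        )

Restores are asynchronous, so the partition only becomes readable once the restore jobs complete.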
Discarding some Glacier files that you do not want to restore
If you do not want to restore all the files from Glacier and would rather discard them during processing, then from Spark 2.1.1 and 2.2.0 you can ignore those files (which fail with an IO/Runtime exception) by setting spark.sql.files.ignoreCorruptFiles to true (source).
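For example (a sketch; note that this skips any file that fails to read, not only Glacier-archived ones):

# Sketch: skip files that throw IO/Runtime exceptions, e.g. Glacier-archived objects.
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")
df = spark.read.parquet("s3://my-bucket/my-dataset/")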
If you define your table through Hive, and use the Hive metastore catalog to query it, it won't try to read the non-selected partitions.
Take a look at the spark.sql.hive.metastorePartitionPruning setting.
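For illustration, a minimal sketch of enabling that pruning setting on the session (it can also go into spark-defaults.conf):

# Sketch: push partition predicates down to the Hive metastore so only the
# selected partitions are listed and read.
spark.conf.set("spark.sql.hive.metastorePartitionPruning", "true")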
try this setting:
ss.sql("set spark.sql.hive.caseSensitiveInferenceMode=NEVER_INFER")
or
add the spark-defaults.conf config:
spark.sql.hive.caseSensitiveInferenceMode NEVER_INFER
The S3 connectors from Amazon (s3://) and the ASF (s3a://) don't work with Glacier. Certainly nobody tests s3a against Glacier, and if there were problems, you'd be left to fix them yourself. Just copy the data into S3 or onto local HDFS and then work with it there.
