We are getting duplicated values when querying data from a parquet file using PySpark, while the same query returns correct data from Presto.
Spark Version: 3.1
Configuration setup so far:
from scbuilder.kubernetes import Kubernetes
kobj = Kubernetes(kubernetes = True)
kobj.setExecutorCores(5)
kobj.setExecutorMemory("5g")
kobj.addAdditionalConf("spark.driver.memory", "8g")
kobj.setNumberOfExecutor(2)
sc = kobj.buildSparkSession()
sc.getActiveSession()
# Note: Hadoop options cannot be set through sc.conf; they belong on the SparkContext's
# Hadoop configuration. No path-filter class was given, so a placeholder is left here.
sc.sparkContext._jsc.hadoopConfiguration().set("mapreduce.input.pathFilter.class", "<path-filter-class>")
sc.conf.set("spark.sql.hive.convertMetastoreParquet", "false")
sc.conf.set("hive.input.format", "org.apache.hadoop.hive.ql.io.HiveInputFormat")
Actual data count: 17722
Count after querying the parquet file: 1036320
Need help understanding why the parquet read returns this many rows and how we can fix it.
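A quick sanity check, as a hedged sketch (the table name is hypothetical): compare the raw count with the de-duplicated count; if the de-duplicated count matches Presto's 17722, the reader is picking up the same rows more than once.

df = sc.sql("SELECT * FROM my_table")  # hypothetical table name
print(df.count(), df.dropDuplicates().count())  # expect 1036320 vs. 17722 if rows are exact duplicates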
I set up a standalone Spark cluster and a standalone HDFS.
I installed PySpark and was able to create a Spark session.
I uploaded one parquet file to HDFS under /data: hdfs://localhost:9000/data
I tried to create a DataFrame out of this directory using PySpark:
from pyspark.sql import SparkSession
spark = SparkSession.builder.master('local[*]').appName("test").getOrCreate()
df = spark.read.parquet("hdfs://localhost:9000/data").withColumnRenamed("Wafer ID", "Wafer_ID")
I am getting an invalid column name error even with withColumnRenamed.
I tried the following code but got the same error:
from pyspark.sql.functions import col
df = spark.read.parquet("hdfs://localhost:9000/data").select(col("Wafer ID").alias("Wafer_ID"))
I have the means to change the column names manually (pandas) or use a different file entirely, but I want to know if there is a way to solve this problem.
What am I doing wrong?
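For reference, a minimal sketch of the pandas workaround mentioned above, assuming the file fits in driver memory, pyarrow is installed, and using a hypothetical local copy of the file:

import pandas as pd
pdf = pd.read_parquet("/tmp/data.parquet")  # hypothetical local copy of the file
df = spark.createDataFrame(pdf.rename(columns={"Wafer ID": "Wafer_ID"}))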
I am new to AWS Glue.
I am facing an issue converting a Glue DynamicFrame to a PySpark DataFrame.
Below is the crawler configuration I created for reading the CSV file:
glue_cityMapDB="csvDb"
glue_cityMapTbl="csv table"
datasource2 = glue_context.create_dynamic_frame.from_catalog(database = glue_cityMapDB, table_name = glue_cityMapTbl, transformation_ctx = "datasource2")
datasource2.show()
print("Show the data source2 city DF")
cityDF=datasource2.toDF()
cityDF.show()
Output:
Here I am getting output from the Glue DynamicFrame (datasource2.show()).
But after converting to the PySpark DataFrame, I am getting the following error:
S3NativeFileSystem (S3NativeFileSystem.java:open(1208)) - Opening 's3://s3source/read/names.csv' for reading 2020-04-24 05:08:39,789 ERROR [Executor task launch worker for task
I would appreciate it if anybody could help with this.
Make sure the file is UTF-8 encoded. You can check the encoding with file, or convert it with iconv or any text editor like Sublime.
You can also read the file as a DataFrame using:
df = spark.read.csv('s3://s3source/read/names.csv')
and then convert it back to a DynamicFrame using fromDF(), as sketched below.
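A minimal sketch of that route, assuming the glue_context from the question and that the CSV has a header row (the name passed to fromDF() is arbitrary):

from awsglue.dynamicframe import DynamicFrame
spark = glue_context.spark_session
df = spark.read.csv('s3://s3source/read/names.csv', header=True)
cityDyf = DynamicFrame.fromDF(df, glue_context, "cityDyf")  # back to a dynamic frame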
I am having issues loading multiple files into a DataFrame in Databricks. When I load a parquet file from an individual folder, it is fine, but the following error returns when I try to load multiple files into the DataFrame:
DF = spark.read.parquet('S3 path/')
"org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually."
Per other StackOverflow answers, I added spark.sql.files.ignoreCorruptFiles true to the cluster configuration, but it didn't resolve the issue. Any other ideas?
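One common cause of this error is that the top-level path contains only subdirectories (or empty/_SUCCESS files), so no parquet files match at that level. A hedged sketch of two things to try, with a placeholder bucket path:

# Spark 3.x: recurse into subdirectories under the prefix
df = spark.read.option("recursiveFileLookup", "true").parquet("s3://bucket/path/")
# or target the leaf directories explicitly with a glob
df = spark.read.parquet("s3://bucket/path/*/")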
I'm running a Spark Notebook to save a DataFrame as a Parquet File in the Bluemix Object Storage.
I want to overwrite the Parquet File, when rerunning the Notebook. But actually it's just appending the data.
Below is a sample of the IPython code:
df = sqlContext.sql("SELECT * FROM table")
df.write.parquet("swift://my-container.spark/simdata.parquet", mode="overwrite")
I'm not a Python guy, but SaveMode works for DataFrames like this (Scala syntax):
df.write.mode(SaveMode.Overwrite).parquet("swift://my-container.spark/simdata.parquet")
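In PySpark the same thing is written with a string mode, exactly as in the question:

df.write.mode("overwrite").parquet("swift://my-container.spark/simdata.parquet")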
I think the object storage replaces only 'simdata.parquet' itself; the 'part-0000*' files remain because they were written under 'simdata.parquet' with the UUID of the app id, so when you read it back, the DataFrame picks up every file matching 'simdata.parquet*', stale parts included.
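If stale part files do linger, a hedged workaround (a sketch, assuming the notebook's sc SparkContext and a configured swift filesystem) is to delete the target directory through the Hadoop FileSystem API before rewriting:

# Delete the old directory first so no stale part files survive the rewrite
path = sc._jvm.org.apache.hadoop.fs.Path("swift://my-container.spark/simdata.parquet")
fs = path.getFileSystem(sc._jsc.hadoopConfiguration())
if fs.exists(path):
    fs.delete(path, True)  # recursive delete
df.write.parquet("swift://my-container.spark/simdata.parquet")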
I have a large dataset stored in a BigQuery table and I would like to load it into a PySpark RDD for ETL data processing.
I realized that BigQuery supports the Hadoop Input/Output format:
https://cloud.google.com/hadoop/writing-with-bigquery-connector
and PySpark should be able to use this interface to create an RDD via the method newAPIHadoopRDD:
http://spark.apache.org/docs/latest/api/python/pyspark.html
Unfortunately, the documentation on both ends seems scarce and goes beyond my knowledge of Hadoop/Spark/BigQuery. Is there anybody who has figured out how to do this?
Google now has an example on how to use the BigQuery connector with Spark.
There does seem to be a problem using GsonBigQueryInputFormat, but I got a simple Shakespeare word-counting example working:
import json
import pyspark

sc = pyspark.SparkContext()
# The connector stages BigQuery data through GCS; this reads the cluster's default bucket.
hadoopConf = sc._jsc.hadoopConfiguration()
hadoopConf.get("fs.gs.system.bucket")
conf = {"mapred.bq.project.id": "<project_id>", "mapred.bq.gcs.bucket": "<bucket>",
        "mapred.bq.input.project.id": "publicdata", "mapred.bq.input.dataset.id": "samples",
        "mapred.bq.input.table.id": "shakespeare"}
tableData = (sc.newAPIHadoopRDD("com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
                                "org.apache.hadoop.io.LongWritable", "com.google.gson.JsonObject",
                                conf=conf)
             .map(lambda kv: json.loads(kv[1]))                       # each record is one JSON row
             .map(lambda row: (row["word"], int(row["word_count"]))) # (word, count) pairs
             .reduceByKey(lambda x, y: x + y))
print(tableData.take(10))