How to access Cassandra columns with Pig

I have a simple column family in my Cassandra keyspace. I need to access it using Pig. Please help me understand how this works:
SD = LOAD 'cassandra://SampleData/Queries' USING CassandraStorage() as (f1,f2,f3);
If I perform
X = foreach SD generate f1; dump X;
it gives me all the keys stored in the Queries table. I need to be able to generate a pair (key, value), where key is a row key and value is the value of the column named 'UpdateTimeStamp' for that key.
I figured out that if I do
Y = foreach SD generate f2.name; dump Y;
it goes through all rows and prints the names of the columns present in each row.
if I do
Z = foreach SD generate f2.value; dump Z;
it gives me the same thing as above except instead of column names, it prints column values.
I need to be able to generate a relation (key, timestamp), something like this:
T = foreach SD generate (f1, f2.value(for f2.name == 'UpdateTimeStamp'));
Obviously, Pig won't accept the statement above.

The columns from Cassandra are loaded into Pig as an inner bag of tuples.
Try this:
data = LOAD 'cassandra://SampleData/Queries' USING CassandraStorage()
AS (keycolumn, columns: bag {T: tuple(columnname, columnvalue)});
dump data; -- check what is in the data alias
data2 = FOREACH data GENERATE keycolumn, columns.columnname;
dump data2;
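From there, a minimal sketch (untested) of how you could keep only the UpdateTimeStamp column per key, by flattening the inner bag and filtering on the column name:
flat = FOREACH data GENERATE keycolumn, FLATTEN(columns) AS (columnname, columnvalue);
ts_only = FILTER flat BY columnname == 'UpdateTimeStamp';
T = FOREACH ts_only GENERATE keycolumn, columnvalue;
dump T;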

Related

How to make my identity column consecutive on delta table in Azure Databricks?

I am trying to create a delta table with a consecutive identity column. The goal is for our clients to see if there is some data they did not receive from us.
It looks like the generated identity column is not consecutive, which makes the "INCREMENT BY 1" quite misleading.
store_visitor_type_name = ["apple","peach","banana","mango","ananas"]
card_type_name = ["door","desk","light","coach","sink"]
store_visitor_type_desc = ["monday","tuesday","wednesday","thursday","friday"]
colnames = ["column2","column3","column4"]
data_frame = spark.createDataFrame(zip(store_visitor_type_name,card_type_name,store_visitor_type_desc),colnames)
data_frame.createOrReplaceTempView('vw_increment')
data_frame.display()
%sql
CREATE or REPLACE TABLE TEST(
`column1SK` BIGINT GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1)
,`column2` STRING
,`column3` STRING
,`column4` STRING
,`inserted_timestamp` TIMESTAMP
,`modified_timestamp` TIMESTAMP
)
USING delta
LOCATION '/mnt/Marketing/Sales';
MERGE INTO TEST as target
USING vw_increment as source
ON target.`column2` = source.`column2`
WHEN MATCHED
AND (target.`column3` <> source.`column3`
OR target.`column4` <> source.`column4`)
THEN
UPDATE SET
`column2` = source.`column2`
,`modified_timestamp` = current_timestamp()
WHEN NOT MATCHED THEN
INSERT (
`column2`
,`column3`
,`column4`
,`modified_timestamp`
,`inserted_timestamp`
) VALUES (
source.`column2`
,source.`column3`
,source.`column4`
,current_timestamp()
,current_timestamp()
)
I'm getting the following results. You can see this is not sequential. What is also very confusing is that it does not start at 1, even though that is explicitly specified in the query.
I can see in the documentation (https://docs.databricks.com/sql/language-manual/sql-ref-syntax-ddl-create-table-using.html#parameters):
The automatically assigned values start with start and increment by
step. Assigned values are unique but are not guaranteed to be
contiguous. Both parameters are optional, and the default value is 1.
step cannot be 0.
Is there a workaround to make this identity column consecutive?
I guess I could have another column and do a ROW_NUMBER operation after the MERGE, but it looks expensive.
You can use PySpark to achieve this instead of the row_number() function.
I read the TEST table as a Spark dataframe and converted it to a pandas-on-Spark dataframe. In the pandas dataframe, I created a new index column using reset_index().
Then I converted it back to a Spark dataframe and added 1 to the index column values, since the index starts at 0.
df = spark.sql("select * from test")
pdf = df.to_pandas_on_spark()
#to create new index column.
pdf.reset_index(inplace=True)
final_df = pdf.to_spark()
#Since index starts from 0, I have added 1 to it.
final_df.withColumn('index',final_df['index']+1).show()
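For comparison, the ROW_NUMBER idea mentioned in the question could look roughly like this in PySpark (a sketch reusing the df read above; ordering by column2 is just an assumption, use whatever ordering makes sense for your data):
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# a single global window (no partitioning), so row_number() yields 1, 2, 3, ... with no gaps
w = Window.orderBy("column2")
df.withColumn("consecutive_id", F.row_number().over(w)).show()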

Spark - Performing union of Dataframes inside a for loop starting from empty DataFrame

I have a DataFrame with a column called "generationId" and other fields. The field "generationId" takes integer values from 1 to N (the upper bound N is known and small, between 10 and 15), and I want to process the DataFrame in the following way (pseudo code):
results = emptyDataFrame <=== how do I do this ?
for (i <- 0 until getN(df)) {
val input = df.filter($"generationId" === i)
results.union(getModel(i).transform(input))
}
Here getN(df) gives the N for that data frame based on some criteria. In the loop, input is filtered based on matching against "i" and then fed to some model (some internal library) which transforms the input by adding 3 more columns to it.
Ultimately I would like to get the union of all those transformed data frames, so I have all columns of the original data frame plus the 3 additional columns added by the model for each row. I am not able to figure out how to initialize results and union the results in each iteration. I do know the exact schema of the result ahead of time, so I did
val newSchema = ...
but I am not sure how to pass that to the emptyRDD function, build an empty DataFrame, and use it inside the loop.
Also, if there is a more efficient way to do this inside a map operation, please suggest it.
You can do something like this:
(0 until getN(df))
.map(i => {
val input = df.filter($"generationId" === i)
getModel(i).transform(input)
})
.reduce(_ union _)
That way you don't need to worry about the empty df.
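If you still want to start from an empty DataFrame as in your pseudo code, here is a sketch of that variant (assuming newSchema is the StructType of the transformed output, and that spark, df, getN and getModel are as in the question):
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.StructType
import spark.implicits._  // for the $"generationId" syntax

// empty DataFrame with the known result schema
val empty = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], newSchema)

val results = (0 until getN(df)).foldLeft(empty) { (acc, i) =>
  acc.union(getModel(i).transform(df.filter($"generationId" === i)))
}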

My Pig script is creating empty files in HDFS

I have this statement:
--Insert a new column based on filename
Data = LOAD '/user/cloudera/Source_Data' using PigStorage('\t','-tagFile');
Data_Schema = FOREACH Data GENERATE
(chararray)$1 AS Date,
(chararray)$2 AS ID,
(chararray)$3 AS Interval,
(chararray)$4 AS Code,
(chararray)$5 AS S_In,
(chararray)$6 AS S_Out,
(chararray)$7 AS C_In,
(chararray)$8 AS C_Out,
(chararray)$9 AS Traffic;
--Split into different directories
SPLIT Data_Schema INTO Src1 IF (Date == '2016-06-25.txt'),
Src2 IF (Date == '2014-07-31.txt'),
Src3 IF (Date == '2016-01-01.txt');
STORE Src1 INTO '/user/cloudera/Source_Data/2016-06-25' using PigStorage('\t');
STORE Src2 INTO '/user/cloudera/Source_Data/2014-07-31.txt' using PigStorage('\t');
STORE Src3 INTO '/user/cloudera/Source_Data/2016-01-01' using PigStorage('\t');
And here is an example of my original source data:
10000 1388530800000 39 8.600870350350515 13.86183926855984 1.7218329193014124 3.424444103320796 25.972920214509095
When I execute it, it runs successfully, but the files in HDFS contain no data...
Note that I add a new column based on the filename; that's why I have one more column in the FOREACH statement...
If your input files are named 2016-06-25.txt, 2014-07-31.txt and 2016-01-01.txt, then the newly added column is referenced by $0 and contains the file names.
You have to do it like this:
Data_Schema = FOREACH Data GENERATE
(chararray)$0 AS Date,
(chararray)$1 AS ID,
...
Or simply specify the schema while loading the files and keep the rest as it is:
Data = LOAD '/user/cloudera/Source_Data' using PigStorage('\t','-tagFile') as (Date:chararray, ID:chararray, ...);
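Putting the fix together, the corrected script would look roughly like this (untested sketch; the field positions simply shift by one because -tagFile prepends the file name as $0):
Data = LOAD '/user/cloudera/Source_Data' using PigStorage('\t','-tagFile');
Data_Schema = FOREACH Data GENERATE
(chararray)$0 AS Date,     -- the file name added by -tagFile, e.g. 2016-06-25.txt
(chararray)$1 AS ID,
(chararray)$2 AS Interval,
(chararray)$3 AS Code,
(chararray)$4 AS S_In,
(chararray)$5 AS S_Out,
(chararray)$6 AS C_In,
(chararray)$7 AS C_Out,
(chararray)$8 AS Traffic;
-- the SPLIT conditions now match, because Date holds the file name
SPLIT Data_Schema INTO Src1 IF (Date == '2016-06-25.txt'),
Src2 IF (Date == '2014-07-31.txt'),
Src3 IF (Date == '2016-01-01.txt');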

Spark DataFrame created from JavaRDD<Row> copies all columns data into first column

I have a DataFrame which I need to convert into JavaRDD<Row> and back to a DataFrame. I have the following code:
DataFrame sourceFrame = hiveContext.read().format("orc").load("/path/to/orc/file");
//I do order by in above sourceFrame and then I convert it into JavaRDD
JavaRDD<Row> modifiedRDD = sourceFrame.toJavaRDD().map(new Function<Row, Row>() {
    public Row call(Row row) throws Exception {
        if (row != null) {
            // updated row by creating new Row
            return RowFactory.create(updateRow);
        }
        return null;
    }
});
//now I convert above JavaRDD<Row> into DataFrame using the following
DataFrame modifiedFrame = sqlContext.createDataFrame(modifiedRDD,schema);
The sourceFrame and modifiedFrame schemas are the same. When I call sourceFrame.show() the output is as expected: every column has its corresponding values and no column is empty. But when I call modifiedFrame.show() I see all the column values merged into the first column. For example, assume the source DataFrame has 3 columns as shown below:
_col1 _col2 _col3
ABC 10 DEF
GHI 20 JKL
When I print modifiedFrame, which I converted from the JavaRDD, it shows up like this:
_col1 _col2 _col3
ABC,10,DEF
GHI,20,JKL
As shown above, _col1 has all the values while _col2 and _col3 are empty. I don't know what is wrong.
As I mentioned in the question's comments:
It might occur because you are passing the list as a single parameter.
return RowFactory.create(updateRow);
Looking at the Apache Spark docs and source code: in the schema-specifying example, the parameters are passed one by one, one per column. If you look at the RowFactory.java and GenericRow classes, a single list parameter is not spread across columns. So try to pass the parameters one per column of the row.
return RowFactory.create(updateRow.get(0),updateRow.get(1),updateRow.get(2)); // List Example
You may try to convert your list to an array and then pass it as a parameter.
YourObject[] updatedRowArray= new YourObject[updateRow.size()];
updateRow.toArray(updatedRowArray);
return RowFactory.create(updatedRowArray);
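Putting that together, a rough sketch of the corrected map call (buildUpdatedValues is a hypothetical helper standing in for however you compute the new column values from the original row):
// assumes: import java.util.List; org.apache.spark.api.java.JavaRDD;
// org.apache.spark.api.java.function.Function; org.apache.spark.sql.Row; org.apache.spark.sql.RowFactory;
JavaRDD<Row> modifiedRDD = sourceFrame.toJavaRDD().map(new Function<Row, Row>() {
    public Row call(Row row) throws Exception {
        // hypothetical helper returning a java.util.List<Object> with one entry per column
        List<Object> updateRow = buildUpdatedValues(row);
        // toArray() spreads the values across the varargs, one value per column
        return RowFactory.create(updateRow.toArray());
    }
});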
By the way, the RowFactory.create() method creates Row objects. From the Apache Spark documentation for the Row object and the RowFactory.create() method:
Represents one row of output from a relational operator. Allows both generic access by ordinal, which will incur boxing overhead for
primitives, as well as native primitive access. It is invalid to use
the native primitive interface to retrieve a value that is null,
instead a user must check isNullAt before attempting to retrieve a
value that might be null.
To create a new Row, use RowFactory.create() in Java or Row.apply() in
Scala.
A Row object can be constructed by providing field values. Example:
import org.apache.spark.sql._
// Create a Row from values.
Row(value1, value2, value3, ...)
// Create a Row from a Seq of values.
Row.fromSeq(Seq(value1, value2, ...))
According to the documentation, you can also apply your own logic to split the row into its columns while creating the Row objects. But I think converting the list to an array and passing it as a parameter will work for you (I couldn't try it myself; please post your feedback, thanks).

What is the most effective way to get elements of an RDD in Spark

I need to get the values of two columns of a dataframe that has been converted to an RDD.
The first solution I thought of:
First convert the RDD to a List of Row with RDD.collect().
Then, for each element of the List, get the values using Row[i].getInt(column_index).
This solution works fine with small and medium-sized data, but with large data I run out of memory.
My temporary solution is to create a new RDD which contains only the two columns instead of all columns, and then apply the solution above; this reduces most of the needed memory.
Current implementation is like this:
Row[] rows = sparkDataFrame.collect();
for (int i = 0; i < rows.length; i++) { //about 50 million rows
int yTrue = rows[i].getInt(0);
int yPredict = rows[i].getInt(1);
}
Could you help me improve my solution, or suggest other solutions?
Thanks!
P.S.: I'm a new Spark user!
First convert your big RDD into a DataFrame, and then you can directly select whatever columns you require.
// Create the DataFrame
DataFrame df = sqlContext.jsonFile("examples/src/main/resources/people.json");
// Select only the "name" column
df.select(df.col("name"), df.col("age")).show();
For more detail you can follow this link
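If collecting 50 million rows to the driver is what runs you out of memory, here is a sketch that keeps the work distributed instead, using the same Spark 1.x Java API as above (the column names yTrue and yPredict are placeholders for your real column names):
// select only the two needed columns, then process the rows on the executors
JavaRDD<int[]> pairs = sparkDataFrame
    .select(sparkDataFrame.col("yTrue"), sparkDataFrame.col("yPredict"))
    .toJavaRDD()
    .map(new Function<Row, int[]>() {
        public int[] call(Row row) throws Exception {
            return new int[] { row.getInt(0), row.getInt(1) };
        }
    });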
