Cassandra COPY command to update records

I want to update certain columns in a table using the Cassandra COPY feature, but COPY inserts a new record even when the row is not found. I want COPY to update the columns from the CSV file only when a row with a matching PRIMARY KEY already exists. A sample table and COPY command are shown below.
CREATE TABLE Orders (
    Ord_Id Text PRIMARY KEY,
    Ord_Date Int,
    Ord_Acct Text,
    Ord_Comp_Dt Int,
    Ord_Status Text
);
Sample Data:
Ord_Id | Ord_Date | Ord_Acct | Ord_Comp_Dt | Ord_Status
ORD001 | 20170602 | A001 | 20170615 | InProgress
ORD002 | 20170603 | A002 | 20170607 | Failed
ORD003 | 20170604 | A003 | 20170616 | InProgress
ORD004 | 20170605 | A003 | 20170617 | InProgress
The table above gets a row when an order is placed, with an initial Ord_Status of 'InProgress'. When an order completes, the network provides data containing Ord_Id and Ord_Status.
Network Data
ORD_ID,ORD_STATUS
ORD001,Failed
ORD003,Success
ORD004,Rejected
ORD005,DataIncomplete
The COPY command is provided below:
COPY ord_schema.Orders(Ord_Id, Ord_Status) FROM 'NW170610.csv' WITH HEADER = TRUE;
Table snapshot after executing the COPY command:
Ord_Id | Ord_Date | Ord_Acct | Ord_Comp_Dt | Ord_Status
ORD001 | 20170602 | A001 | 20170615 | Failed
ORD002 | 20170603 | A002 | 20170607 | Failed
ORD003 | 20170604 | A003 | 20170616 | Success
ORD004 | 20170605 | A003 | 20170617 | Rejected
ORD005 | Null | Null | Null | DataIncomplete
ORD005 should not have been inserted, since its primary key does not exist in the table.
Kindly assist: is there any way to check that the data exists before inserting, or to prevent the entry when it does not exist?

Cassandra does an UPSERT, which means it will insert a row if there is none (based on the primary key).
What I'd suggest is adding another column, maybe Ord_Acct (something that brings uniqueness to the data), as a clustering/composite key. Primary key columns can never be null, so if Ord_Acct is not supplied the insert will be rejected. So, to summarize, I'd suggest changing the data model to meet your requirements.

It is not possible to check for the existence of a row before inserting with the COPY command.
The COPY command only parses the CSV and inserts directly. So you have to write your own code that reads the NW170610 csv and, for each record, checks existence with a SELECT query; only if the row exists do you write the new status.
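A minimal sketch of that check-then-write loop, assuming the Python cassandra-driver, a local contact point, and the keyspace/table names from the question:

import csv
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])          # assumed contact point
session = cluster.connect('ord_schema')

check_stmt = session.prepare("SELECT ord_id FROM orders WHERE ord_id = ?")
update_stmt = session.prepare("UPDATE orders SET ord_status = ? WHERE ord_id = ?")

with open('NW170610.csv', newline='') as f:
    reader = csv.reader(f)
    next(reader)                          # skip the ORD_ID,ORD_STATUS header
    for row in reader:
        if len(row) < 2:
            continue
        ord_id, ord_status = row[0], row[1]
        # Only write the status if the order row already exists.
        if session.execute(check_stmt, [ord_id]).one() is not None:
            session.execute(update_stmt, [ord_status, ord_id])

Note that a plain UPDATE is also an upsert in Cassandra, so the preceding SELECT is what prevents new rows from appearing; UPDATE ... IF EXISTS (a lightweight transaction) would combine both steps in a single statement, at the cost of a Paxos round per row.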
Alternatively, dump the order ids of the Orders table to a csv using the COPY TO command:
COPY orders (ord_id) TO 'orders_id.csv';
Now, for each record of NW170610, check whether its id is present in orders_id.csv; if yes, write the record to another file, complete_order.csv.
Finally, load the complete_order.csv file using the COPY FROM command. A sketch of the filtering step follows.
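A rough sketch of that filtering step in plain Python, using the file names from this answer:

import csv

# Build the set of order ids exported with COPY TO.
with open('orders_id.csv', newline='') as f:
    existing_ids = {row[0] for row in csv.reader(f) if row}

# Keep only the network records whose Ord_Id already exists in the table.
with open('NW170610.csv', newline='') as src, open('complete_order.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))         # copy the ORD_ID,ORD_STATUS header
    for row in reader:
        if row and row[0] in existing_ids:
            writer.writerow(row)

complete_order.csv can then be loaded with COPY ... FROM 'complete_order.csv' WITH HEADER = TRUE.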

Related

Fixed-length file reading in Spark with multiple record formats in one file

All,
I am trying to read a file with multiple record types in Spark, but have no clue how to do it. Can someone point out whether there is a way to do it, an existing package, or a user GitHub package?
The example below has a text file with 2 separate record types (it could be more than 2):
00X - record_ind | First_name| Last_name
0-3 record_ind
4-10 firstname
11-16 lastname
============================
00Y - record_ind | Account_#| STATE | country
0-3 record_ind
4-8 Account #
9-10 STATE
11-15 country
input.txt
------------
00XAtun Varma
00Y00235ILUSA
00XDivya Reddy
00Y00234FLCANDA
sample output/data frame
output.txt
record_ind | x_First_name | x_Last_name | y_Account | y_STATE | y_country
---------------------------------------------------------------------------
00x | Atun | Varma | null | null | null
00y | null | null | 00235 | IL | USA
00x | Divya | Reddy | null | null | null
00y | null | null | 00234 | FL | CANDA
One way to achieve this is to load the data as text, so that each complete row is loaded into a single column named 'value'. Then call a UDF (or use the built-in column functions) to transform each row based on its record indicator, so that all rows follow the same schema.
Finally, use that schema to create the required dataframe and save it to the database. A sketch of this approach follows.
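A rough sketch of that idea using built-in column functions rather than a Python UDF; the character offsets below are read off the sample rows in the question and may need adjusting for the real layout:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("multi-record-fixed-width").getOrCreate()

# Each input line lands in a single column named "value".
raw = spark.read.text("input.txt")

record_ind = F.substring("value", 1, 3)                              # e.g. 00X or 00Y
name_parts = F.split(F.trim(F.substring("value", 4, 1000)), r"\s+")  # for 00X rows

parsed = raw.select(
    F.lower(record_ind).alias("record_ind"),
    F.when(record_ind == "00X", name_parts.getItem(0)).alias("x_First_name"),
    F.when(record_ind == "00X", name_parts.getItem(1)).alias("x_Last_name"),
    F.when(record_ind == "00Y", F.substring("value", 4, 5)).alias("y_Account"),
    F.when(record_ind == "00Y", F.substring("value", 9, 2)).alias("y_STATE"),
    F.when(record_ind == "00Y", F.trim(F.substring("value", 11, 5))).alias("y_country"),
)
parsed.show()

Rows of one type simply get null in the other type's columns, which matches the desired output; parsed can then be written to the database with the usual DataFrameWriter.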

How to add rows to an existing partition in Spark?

I have to update historical data. By update, I mean adding new rows and sometimes new columns to an existing partition on S3.
The current partitioning is implemented by date: created_year={}/created_month={}/created_day={}. In order to avoid too many objects per partition, I do the following to maintain a single object per partition:
def save_repartitioned_dataframe(bucket_name, df):
    dest_path = form_path_string(bucket_name, repartitioned_data=True)
    print('Trying to save repartitioned data at: {}'.format(dest_path))
    df.repartition(1, "created_year", "created_month", "created_day").write.partitionBy(
        "created_year", "created_month", "created_day").parquet(dest_path)
    print('Data repartitioning complete at the following location:')
    print(dest_path)
    _, count, distinct_count, num_partitions = read_dataframe_from_bucket(bucket_name, repartitioned_data=True)
    return count, distinct_count, num_partitions
A scenario exists where I have to add certain rows that have these columnar values:
created_year | created_month | created_day
2019 | 10 | 27
This means that the file(S3 object) at this path: created_year=2019/created_month=10/created_day=27/some_random_name.parquet will be appended with the new rows.
If there is a change in the schema, then all the objects will have to implement that change.
I tried looking into how this works generally, so, there are two modes of interest: overwrite, append.
The first one will just write the current data and delete everything else at the destination. I do not want that situation. The second one will append, but may end up creating more objects. I do not want that situation either. I also read that dataframes are immutable in Spark.
So, how do I achieve appending the new data as it arrives to existing partitions and maintaining one object per day?
Based on your question, I understand that you need to add new rows to the existing data without increasing the number of parquet files. This can be achieved by doing operations on specific partition folders. There are three cases to consider.
1) New partition
This means the incoming data has a new value in the partition columns. In your case, this can be like:
Existing data
| year | month | day |
| ---- | ----- | --- |
| 2020 | 1 | 1 |
New data
| year | month | day |
| ---- | ----- | --- |
| 2020 | 1 | 2 |
So, in this case, you can just create a new partition folder for the incoming data and save it as you did.
partition_path = "/path/to/data/year=2020/month=1/day=2"
new_data.repartition(1, "year", "month", "day").write.parquet(partition_path)
2) Existing partition, new data
This is where you want to append new rows to the existing data. It could be like:
Existing data
| year | month | day | key | value |
| ---- | ----- | --- | --- | ----- |
| 2020 | 1 | 1 | a | 1 |
New data
| year | month | day | key | value |
| ---- | ----- | --- | --- | ----- |
| 2020 | 1 | 1 | b | 1 |
Here we have a new record for the same partition. You can use the "append mode" but you want a single parquet file in each partition folder. That's why you should read the existing partition first, union it with the new data, then write it back.
partition_path = "/path/to/data/year=2020/month=1/day=1"
old_data = spark.read.parquet(partition_path)
write_data = old_data.unionByName(new_data)
# mode("overwrite") is needed because the path already exists; note that
# overwriting a path that is also being read from requires materializing
# write_data first (e.g. cache/checkpoint) or writing to a temporary location.
write_data.repartition(1, "year", "month", "day").write.mode("overwrite").parquet(partition_path)
3) Existing partition, existing data
What if the incoming data is an UPDATE, rather than an INSERT? In this case, you should update a row instead of inserting a new one. Imagine this:
Existing data
| year | month | day | key | value |
| ---- | ----- | --- | --- | ----- |
| 2020 | 1 | 1 | a | 1 |
New data
| year | month | day | key | value |
| ---- | ----- | --- | --- | ----- |
| 2020 | 1 | 1 | a | 2 |
"a" had a value of 1 before, now we want it to be 2. So, in this case, you should read existing data and update existing records. This could be achieved like the following.
from pyspark.sql import functions as F

partition_path = "/path/to/data/year=2020/month=1/day=1"
old_data = spark.read.parquet(partition_path)
write_data = old_data.join(new_data, ["year", "month", "day", "key"], "outer")
write_data = write_data.select(
    "year", "month", "day", "key",
    # take the incoming value when present, otherwise keep the old one
    F.coalesce(new_data["value"], old_data["value"]).alias("value")
)
write_data.repartition(1, "year", "month", "day").write.mode("overwrite").parquet(partition_path)
When we outer join the old data with the new data, there can be four outcomes:
both sides have the same value, so it doesn't matter which one we take;
the two sides have different values, so we take the new value;
the old data doesn't have a value but the new data does, so we take the new one;
the new data doesn't have a value but the old data does, so we keep the old one.
To get exactly this behaviour, coalesce from pyspark.sql.functions does the work.
Note that this solution covers the second case as well.
About schema change
Spark supports schema merging for the Parquet file format, which means you can add columns to or remove columns from your data. As you do, you may notice that some columns are not present when reading the data from the top-level path; this is because Spark disables schema merging by default. From the documentation:
Like Protocol Buffer, Avro, and Thrift, Parquet also supports schema evolution. Users can start with a simple schema, and gradually add more columns to the schema as needed. In this way, users may end up with multiple Parquet files with different but mutually compatible schemas. The Parquet data source is now able to automatically detect this case and merge schemas of all these files.
To be able to read all columns, you need to set the mergeSchema option to true.
df = spark.read.option("mergeSchema", "true").parquet(path)

How to insert a data frame into Cassandra in Scala

I have a data frame as shown below and want to insert this data into a Cassandra table.
+---------+------+-----------+
| name | id | city |
+---------+------+-----------+
| sam | 123 | Atlanta |
| John | 456 | Texas |
+---------+------+-----------+
I am using the code below, but it inserts only the last row.
df.write.format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "tablename", "keyspace" -> "keyspace"))
  .mode(saveMode = "Append").save()
How to insert a data frame into Cassandra in Scala?
The code you provided should insert all the rows into Cassandra. There are a few reasons why it might not:
That is actually all the data there is: the df for some reason only contains a single row.
There are multiple rows, but they share the same partition key, which means subsequent writes overwrite the initial write (a quick check for this is sketched below).
An exception is being thrown; this should be obvious in the logs.
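If the second case is the suspect, a rough check (shown in PySpark for brevity; the Scala equivalent is analogous, and the primary-key column is assumed here to be id) would be:

# Compare total rows to distinct primary keys; any duplicates mean later
# rows silently overwrite earlier ones when written to Cassandra.
print(df.count(), df.select("id").distinct().count())
df.groupBy("id").count().filter("`count` > 1").show()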

How to return datetime in Excel when SQL table has only time?

I am currently querying an Informix table with a column that is typed as datetime but only holds a time, such as "07:30:44". I need to be able to pull this data and have send_time show the actual time, not "0:00:00".
Example:
Informix Table Data
--------------------------
| send_date | 02/09/2016 | --datetime field
--------------------------
| send_time | 07:30:44 | --datetime field
--------------------------
When I query through Excel via the Microsoft Query editor, I can see the correct value, "07:30:44", in the preview; however, what is returned in my sheet is "0:00:00". Normally I would change the formatting on the cell, but the literal value for the entire column is "12:00:00 AM".
Excel pulls/displays
--------------------------
| send_date | 02/09/2016 |
--------------------------
| send_time | 12:00:00 AM| --displayed is "0:00:00"
--------------------------
I have cast the field as char to see if it would return correctly, and it does! However, you can't use parameters or build quick reports with this method.
Desired Excel output
--------------------------
| send_date | 02/09/2016 |
--------------------------
| send_time | 07:30:44 |
--------------------------
Is there another way to resolve this? I'd love to give the users an easy way to build queries without having to cast anything.

Cassandra: Searching for NULL values

I have a table MACRecord in Cassandra as follows :
CREATE TABLE has.macrecord (
    macadd text PRIMARY KEY,
    position int,
    record int,
    rssi1 float,
    rssi2 float,
    rssi3 float,
    rssi4 float,
    rssi5 float,
    timestamp timestamp
)
I have 5 different nodes, each updating a row based on its number, i.e. node 1 only updates rssi1, node 2 only updates rssi2, etc. This evidently leaves null values in the other columns.
I cannot seem to find a query that will give me only those rows that are not null. Specifically, I have referred to this post.
I want to be able to query, for example, SELECT * FROM MACRecord WHERE RSSI1 != NULL, as in MySQL. However, it seems that neither null comparisons nor operators such as != are supported in CQL.
Is there an alternative to null values, such as a special flag? I am inserting floats, so unlike strings I cannot insert something like ''. What is a possible workaround for this problem?
Edit:
My data model in MySQL was like this:
+-----------+--------------+------+-----+-------------------+-----------------------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+-------------------+-----------------------------+
| MACAdd | varchar(17) | YES | UNI | NULL | |
| Timestamp | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| Record | smallint(6) | YES | | NULL | |
| RSSI1 | decimal(5,2) | YES | | NULL | |
| RSSI2 | decimal(5,2) | YES | | NULL | |
| RSSI3 | decimal(5,2) | YES | | NULL | |
| RSSI4 | decimal(5,2) | YES | | NULL | |
| RSSI5 | decimal(5,2) | YES | | NULL | |
| Position | smallint(6) | YES | | NULL | |
+-----------+--------------+------+-----+-------------------+-----------------------------+
Each node (1-5) was querying MySQL based on its number, for example node 1: "SELECT * FROM MACRecord WHERE RSSI1 IS NOT NULL".
I updated my data model in Cassandra as follows, so that rssi1-rssi5 are now text columns:
CREATE TABLE has.macrecord (
    macadd text PRIMARY KEY,
    position int,
    record int,
    rssi1 text,
    rssi2 text,
    rssi3 text,
    rssi4 text,
    rssi5 text,
    timestamp timestamp
)
I was thinking that each node would initially insert the string 'NULL' for a record, and when actual rssi data arrives it would simply replace the 'NULL' string. That avoids tombstones, and to the user the values are clearly flagged as not being valid data.
However, I am still puzzled as to how I will retrieve results like I did in MySQL. There is no != operator in Cassandra. How can I write a query that gives me a result set like "SELECT * FROM has.MACRecord WHERE RSSI1 != 'NULL'"?
You can only select rows in CQL based on the PRIMARY KEY fields, which by definition cannot be null. This also applies to secondary indexes. So I don't think Cassandra will be able to do the filtering you want on the data fields. You could select on some other criteria and then write your client to ignore rows that had null values.
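For example, a minimal sketch of that client-side filtering with the Python cassandra-driver (the contact point is an assumption; keyspace and table come from the question):

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('has')

# CQL cannot express "WHERE rssi1 IS NOT NULL" on a non-key column,
# so fetch the rows and drop the null ones on the client.
rows = session.execute("SELECT macadd, rssi1 FROM macrecord")
non_null = [row for row in rows if row.rssi1 is not None]
for row in non_null:
    print(row.macadd, row.rssi1)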
Or you could create a different table for each rssiX value, so that none of them would be null.
If you are only interested in some kind of aggregation, then the null values are treated as zero, so you could do something like this:
SELECT sum(rssi1) FROM macrecord WHERE macadd='someadd';
The sum() function is available in Cassandra 2.2.
You might also be able to do some kind of trick with a user defined function/aggregate, but I think it would be simpler to have multiple tables.
