Formatting the result of an SQL query into JSON - python-3.x

I have a small database as defined in the code snippet below. I want to query it for all of the information and send that to a Vue app as JSON via a Flask API. At the moment the query I am using is:
SELECT tbl_room.room, tbl_room.room_id, tbl_device.name, tbl_display.display, tbl_function.function, tbl_device.format
FROM tbl_device
INNER JOIN tbl_room ON tbl_room.id = tbl_device.room_id
INNER JOIN tbl_display ON tbl_display.id = tbl_device.display_id
INNER JOIN tbl_function ON tbl_function.id = tbl_device.function_id
ORDER BY tbl_room.room_id;
This gives me output like:
Bedroom (Main) bedroom_main bme280/1 gauge temperature {"min": 0, "max": 50, "dp": 1, "units": "°C"}
Bedroom (Main) bedroom_main bme280/1 gauge humidity {"min": 0, "max": 100, "dp": 1, "units": "%"}
Bedroom (Main) bedroom_main bme280/1 gauge pressure {"min": 0, "max": 1100, "dp": 1, "units": "hPa"}
Front Room front_room ds18b20/heater gauge temperature {"min": 0, "max": 50, "dp": 1, "units": "°C"}
I would like to get it into a JSON file so that it is arranged as:
[
    { "name": "Office",
      "id": "office",
      "devices": []
    },
    { "name": "Front Room",
      "id": "front_room",
      "devices": []
    }
]
Can this be done in a single SQL query? Or do I have to run a query for each room in a loop? Or is it more efficient to dump the whole dataset out in one query and process it in Python afterwards? This is a small dataset, but I'm interested to know which is the most efficient method.
Thank you in advance,
Martyn
Here is my table structure:
-- Table: tbl_device
CREATE TABLE tbl_device (
    name        VARCHAR NOT NULL ON CONFLICT ROLLBACK,
    room_id     INTEGER CONSTRAINT fk_room REFERENCES tbl_room (id) NOT NULL,
    function_id INTEGER CONSTRAINT fk_function REFERENCES tbl_function (id) NOT NULL ON CONFLICT ROLLBACK,
    display_id  INTEGER CONSTRAINT fk_display REFERENCES tbl_display (id) NOT NULL ON CONFLICT ROLLBACK,
    format      VARCHAR NOT NULL ON CONFLICT ROLLBACK DEFAULT [default],
    UNIQUE (name, room_id, function_id, display_id) ON CONFLICT ROLLBACK
);

-- Table: tbl_display
CREATE TABLE tbl_display (
    id      INTEGER PRIMARY KEY AUTOINCREMENT,
    display VARCHAR NOT NULL ON CONFLICT ROLLBACK UNIQUE ON CONFLICT ROLLBACK
);

-- Table: tbl_function
CREATE TABLE tbl_function (
    id       INTEGER PRIMARY KEY AUTOINCREMENT,
    function VARCHAR NOT NULL ON CONFLICT ROLLBACK UNIQUE ON CONFLICT ROLLBACK,
    control  BOOLEAN NOT NULL DEFAULT (0)
);

-- Table: tbl_room
CREATE TABLE tbl_room (
    id      INTEGER PRIMARY KEY AUTOINCREMENT,
    room_id VARCHAR NOT NULL UNIQUE ON CONFLICT ROLLBACK,
    room    VARCHAR NOT NULL ON CONFLICT ROLLBACK
);

First, there is no way to feed a JSON response directly from an SQL database to Vue.js or any other front-end app. The Vue app is the front end of your application; you have to build a backend which connects to the database, fetches the necessary data, converts it to JSON, and sends it to the Vue app.
To develop a backend you can use languages such as PHP, Python, Java or Node.js.
If you can continue with PHP, it is very easy to fetch the data and convert it to JSON.
But if you need to stay with Python, you can use Flask or any other Python web framework to do the same thing.
Here is a sample PHP version:
<?php
$dbhost = 'hostname';
$dbuser = 'username';
$dbpass = 'password';
$dbname = 'database';

$db = new mysqli($dbhost, $dbuser, $dbpass, $dbname);
if ($db->connect_errno) {
    printf("Failed to connect to database");
    exit();
}

$result = $db->query("SELECT * FROM "); // Your SQL query
$data = array();
while ($row = $result->fetch_assoc()) {
    $data[] = $row;
}
echo json_encode($data);
?>
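For the Flask route that the Python path would need, a minimal sketch might look like the following. The database filename, the route path, and the use of sqlite3 row factories are assumptions for illustration, not details from the question; the SQL is the question's own join.

import sqlite3

from flask import Flask, jsonify

app = Flask(__name__)

QUERY = """
SELECT tbl_room.room, tbl_room.room_id, tbl_device.name,
       tbl_display.display, tbl_function.function, tbl_device.format
FROM tbl_device
INNER JOIN tbl_room     ON tbl_room.id     = tbl_device.room_id
INNER JOIN tbl_display  ON tbl_display.id  = tbl_device.display_id
INNER JOIN tbl_function ON tbl_function.id = tbl_device.function_id
ORDER BY tbl_room.room_id;
"""

@app.route("/api/devices")           # route path is illustrative
def devices():
    con = sqlite3.connect("home.db")  # filename is an assumption
    con.row_factory = sqlite3.Row     # rows can then be turned into dicts
    try:
        rows = [dict(r) for r in con.execute(QUERY)]
    finally:
        con.close()
    return jsonify(rows)              # flat rows; see the grouping sketch further down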

If your version of SQLite was compiled with the JSON1 extension enabled, something like:
SELECT json_group_array(json_object('name', tbl_room.room,
                                    'id', tbl_room.room_id,
                                    'devices', json_array()))
FROM tbl_room;
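If you instead dump the flat result of the join in one query, regrouping it by room in Python is straightforward. The sketch below is illustrative: it assumes the standard-library sqlite3 module, a database file called home.db, and a per-device shape built from the name, display, function and format columns, which the question leaves open (the empty "devices" lists above don't pin it down).

import json
import sqlite3

query = """
SELECT tbl_room.room, tbl_room.room_id, tbl_device.name,
       tbl_display.display, tbl_function.function, tbl_device.format
FROM tbl_device
INNER JOIN tbl_room     ON tbl_room.id     = tbl_device.room_id
INNER JOIN tbl_display  ON tbl_display.id  = tbl_device.display_id
INNER JOIN tbl_function ON tbl_function.id = tbl_device.function_id
ORDER BY tbl_room.room_id;
"""

con = sqlite3.connect("home.db")      # filename is an assumption
con.row_factory = sqlite3.Row

rooms = {}                            # room_id -> room dict, insertion-ordered
for row in con.execute(query):
    room = rooms.setdefault(row["room_id"],
                            {"name": row["room"], "id": row["room_id"], "devices": []})
    room["devices"].append({
        "name": row["name"],
        "display": row["display"],
        "function": row["function"],
        "format": json.loads(row["format"]),   # format holds a JSON string in the sample output
    })
con.close()

print(json.dumps(list(rooms.values()), indent=2))

For a dataset this size, one query plus this loop involves a single round trip and will typically be at least as cheap as issuing a separate query per room.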

Related

Node.js: bulk insert into SQL Server one-to-many

I want to use the Node.js mssql package to bulk insert data from the JSON below:
[
{
"name": "Tom",
"registerDate": "2021-10-10 00:00:00",
"gender": 0,
"consumeRecord":[
{
"date": "2021-10-11 00:00:00",
"price": 102.5
},
{
"date": "2021-10-12 00:00:00",
"price": 200
}
]
},
{
"name": "Mary",
"registerDate": "2021-06-10 00:00:00",
"gender": 1,
"consumeRecord":[
{
"date": "2021-07-11 00:00:00",
"price": 702.5
},
{
"date": "2021-12-12 00:00:00",
"price": 98.2
}
]
}
]
I am trying to use mssql bulk insert for the member records, each of which has multiple consume records.
Is there any way to insert one-to-many data with bulk insert, like below?
It seems I would need to insert into the member table and get the generated id (primary key) first, and then use that id for the related rows in the consume record table.
const sql = require('mssql')

// member table
const memberTable = new sql.Table('Member')
memberTable.columns.add('name', sql.VarChar(50), {nullable: false})
memberTable.columns.add('registerDate', sql.VarChar(50), {nullable: false})
memberTable.columns.add('gender', sql.Int, {nullable: false})

// consume record table
const consumeTable = new sql.Table('ConsumeRecord')
consumeTable.columns.add('MemberId', sql.Int, {nullable: false})
consumeTable.columns.add('Date', sql.VarChar(50), {nullable: false})
consumeTable.columns.add('price', sql.Money, {nullable: false})

// fill the rows
jsonList.forEach(data => {
  memberTable.rows.add(data.name, data.registerDate, data.gender)
  data.consumeRecord.forEach(record => {
    consumeTable.rows.add(data.memberId, record.date, record.price) // <---- should be the member table id, which I don't have yet
  })
})

// bulk insert both tables
const request = new sql.Request()
request.bulk(memberTable, (err, result) => {
})
request.bulk(consumeTable, (err, result) => {
})
Expected Record:
Member Table
id (auto increment) | name | registerDate        | gender
1                   | Tom  | 2021-10-10 00:00:00 | 0
2                   | Mary | 2021-06-10 00:00:00 | 1
Consume Record Table
id | MemberId | Date                | price
1  | 1        | 2021-10-10 00:00:00 | 102.5
2  | 1        | 2021-10-12 00:00:00 | 200
3  | 2        | 2021-07-11 00:00:00 | 702.5
4  | 2        | 2021-12-12 00:00:00 | 98.2
The best way to do this is to upload the whole thing to SQL Server in one batch and have SQL Server fill in the correct foreign keys.
You have two options.
Option 1:
Upload the parent table as a Table-Valued Parameter or a JSON blob.
Insert with an OUTPUT clause to select the inserted IDs back to the client.
Correlate those IDs back to the child table data.
Bulk insert that as well.
Option 2 is a bit easier: do the whole thing in SQL.
Upload everything as one big JSON blob.
Insert the parent table with an OUTPUT clause into a table variable.
Insert the child table, joining the IDs from the table variable.
CREATE TABLE Member (
    Id           int IDENTITY PRIMARY KEY,
    name         varchar(50),
    registerDate datetime NOT NULL,
    gender       tinyint NOT NULL
);
CREATE TABLE ConsumeRecord (
    MemberId int NOT NULL REFERENCES Member (Id),
    Date     datetime NOT NULL,
    price    decimal(9,2)
);
Note the more sensible datatypes of the columns.
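-- Assumption: @json below is an NVARCHAR(MAX) variable or parameter holding the uploaded JSON array.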
DECLARE @ids TABLE (jsonIndex nvarchar(5) COLLATE Latin1_General_BIN2 NOT NULL, memberId int NOT NULL);

WITH Source AS (
    SELECT
        j1.[key],
        j2.*
    FROM OPENJSON(@json) j1
    CROSS APPLY OPENJSON(j1.value)
        WITH (
            name varchar(50),
            registerDate datetime,
            gender tinyint
        ) j2
)
MERGE Member m
USING Source s
ON 1 = 0 -- never match
WHEN NOT MATCHED THEN
    INSERT (name, registerDate, gender)
    VALUES (s.name, s.registerDate, s.gender)
OUTPUT s.[key], inserted.Id
INTO @ids (jsonIndex, memberId);

INSERT ConsumeRecord (MemberId, Date, price)
SELECT
    i.memberId,
    j2.date,
    j2.price
FROM OPENJSON(@json) j1
CROSS APPLY OPENJSON(j1.value, '$.consumeRecord')
    WITH (
        date datetime,
        price decimal(9,2)
    ) j2
JOIN @ids i ON i.jsonIndex = j1.[key];
db<>fiddle
Unfortunately, INSERT only allows you to OUTPUT columns from the inserted rows, not source columns that were not inserted, so we need to hack it with a somewhat unusual MERGE.

Why does this happen when inserting thousands of rows into an Oracle table with Node.js?

I am new to Oracle and I am looking for the best way to insert thousands (maybe millions) of records into a table.
I have seen other questions and answers about this situation, but in that answer the PL/SQL code uses TWO associative arrays of scalar types (PLS_INTEGER) that act as table columns; I need the same thing but with ONE nested table of a record/complex type, so that each element is inserted into the table as a row.
First of all, I have this code in Node.js (TypeScript) using the oracledb package (v 5.1.0):
let data: Array<DataModel>;
// The data variable is populated with data; 'DataModel' is an interface.
// data is an array that mirrors the table's structure exactly:
// [
//   { C_ONE: 'mike',   C_TWO: 'hugman', C_THREE: '34', ... with the other 12 columns },
//   { C_ONE: 'robert', C_TWO: 'zuck',   C_THREE: '34', ... with the other 12 columns },
//   { C_ONE: 'john',   C_TWO: 'gates',  C_THREE: '34', ... with the other 12 columns }
// ]

let context;
try {
  context = await oracledb.getConnection({
    user: 'admin',
    password: 'admin',
    connectString: 'blabla'
  });

  const result = await context.execute(
    // My SP
    'BEGIN PACKAGE_TEST.SP_TEST_STRESS(:p_data, :p_status); END;',
    {
      // My JSON array
      p_data: {
        type: 'PACKAGE_TEST.T_STRESS',
        val: data
      },
      // Variable to check whether it all succeeded or failed... this doesn't matter :)
      p_status: {
        type: oracledb.NUMBER,
        val: 1,
        dir: oracledb.BIND_OUT
      }
    },
    { autoCommit: true }
  );

  console.log(result);
  if ((result.outBinds as { p_status: number }).p_status === 0) {
    // Correct
  } else {
    // Failed
  }
} catch (error) {
  // bla bla for errors
} finally {
  if (context) {
    try {
      await context.close();
    } catch (error) {
      // bla bla for errors
    }
  }
}
And the PL/SQL code for my stored procedure:
CREATE OR REPLACE PACKAGE PACKAGE_TEST
IS
    TYPE R_STRESS IS RECORD
    (
        C_ONE      VARCHAR(50),
        C_TWO      VARCHAR(500),
        C_THREE    VARCHAR(10),
        C_FOUR     VARCHAR(100),
        C_FIVE     VARCHAR(10),
        C_SIX      VARCHAR(100),
        C_SEVEN    VARCHAR(50),
        C_EIGHT    VARCHAR(50),
        C_NINE     VARCHAR(50),
        C_TEN      VARCHAR(50),
        C_ELEVEN   VARCHAR(50),
        C_TWELVE   VARCHAR(50),
        C_THIRTEEN VARCHAR(300),
        C_FOURTEEN VARCHAR(100),
        C_FIVETEEN VARCHAR(300),
        C_SIXTEEN  VARCHAR(50)
    );

    TYPE T_STRESS IS VARRAY(213627) OF R_STRESS;

    PROCEDURE SP_TEST_STRESS
    (
        P_DATA_FOR_PROCESS T_STRESS,
        P_STATUS           OUT NUMBER
    );
END;
/
CREATE OR REPLACE PACKAGE BODY PACKAGE_TEST
IS
    PROCEDURE SP_TEST_STRESS
    (
        P_DATA_FOR_PROCESS T_STRESS,
        P_STATUS           OUT NUMBER
    )
    IS
    BEGIN
        DBMS_OUTPUT.put_line('started');
        BEGIN
            FORALL i IN 1 .. P_DATA_FOR_PROCESS.COUNT
                INSERT INTO TEST_STRESS
                (
                    C_ONE, C_TWO, C_THREE, C_FOUR, C_FIVE, C_SIX, C_SEVEN, C_EIGHT,
                    C_NINE, C_TEN, C_ELEVEN, C_TWELVE, C_THIRTEEN, C_FOURTEEN,
                    C_FIVETEEN, C_SIXTEEN
                )
                VALUES
                (
                    P_DATA_FOR_PROCESS(i).C_ONE,
                    P_DATA_FOR_PROCESS(i).C_TWO,
                    P_DATA_FOR_PROCESS(i).C_THREE,
                    P_DATA_FOR_PROCESS(i).C_FOUR,
                    P_DATA_FOR_PROCESS(i).C_FIVE,
                    P_DATA_FOR_PROCESS(i).C_SIX,
                    P_DATA_FOR_PROCESS(i).C_SEVEN,
                    P_DATA_FOR_PROCESS(i).C_EIGHT,
                    P_DATA_FOR_PROCESS(i).C_NINE,
                    P_DATA_FOR_PROCESS(i).C_TEN,
                    P_DATA_FOR_PROCESS(i).C_ELEVEN,
                    P_DATA_FOR_PROCESS(i).C_TWELVE,
                    P_DATA_FOR_PROCESS(i).C_THIRTEEN,
                    P_DATA_FOR_PROCESS(i).C_FOURTEEN,
                    P_DATA_FOR_PROCESS(i).C_FIVETEEN,
                    P_DATA_FOR_PROCESS(i).C_SIXTEEN
                );
        EXCEPTION
            WHEN OTHERS THEN
                P_STATUS := 1;
        END;
        P_STATUS := 0;
    END;
END;
And my target table:
CREATE TABLE TEST_STRESS
(
    C_ONE      VARCHAR(50),
    C_TWO      VARCHAR(500),
    C_THREE    VARCHAR(10),
    C_FOUR     VARCHAR(100),
    C_FIVE     VARCHAR(10),
    C_SIX      VARCHAR(100),
    C_SEVEN    VARCHAR(50),
    C_EIGHT    VARCHAR(50),
    C_NINE     VARCHAR(50),
    C_TEN      VARCHAR(50),
    C_ELEVEN   VARCHAR(50),
    C_TWELVE   VARCHAR(50),
    C_THIRTEEN VARCHAR(300),
    C_FOURTEEN VARCHAR(100),
    C_FIVETEEN VARCHAR(300),
    C_SIXTEEN  VARCHAR(50)
);
An interesting behavior happens in this scenario:
If I send my JSON array with 200 rows, it works perfectly. I don't know exactly how long it takes to complete successfully, but I can tell it's milliseconds.
If I send my JSON array with 200,000 rows, I wait three or four minutes before the promise resolves, and then it throws an exception: ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT.
This happens when passing the JSON array to the procedure parameter; it seems that processing it costs too much.
Why does this happen in the second scenario?
Is there a limit on the number of rows in nested table types, or some default configuration on the Node.js side?
Oracle suggests increasing pga_aggregate_limit, but checking it in SQL Developer with "show parameter pga;" shows it is 3G. Does that mean the data I am sending exceeds 3 GB? Is that normal?
Is there a more viable solution that does not affect the performance of the database?
I appreciate your help.
Each server process gets its own PGA, so I'm guessing this is causing the total aggregate PGA, over all the processes currently running, to go over 3 GB.
I assume this is happening because of what's going on inside your package, but you only show the specification, so there's no way to tell what's happening there.
You're not using a nested table type. You're using a varray. A varray has a maximum length of 2,147,483,647.
It sounds like you're doing something inside your procedure to use too much memory. Maybe you need to process the 200,000 rows in chunks? With no more information about what you're trying to do, can you use some other process to load your data, like sqlldr?
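To make the chunking idea concrete, here is a rough sketch. It is written in Python with the python-oracledb driver purely for illustration (the question uses Node.js), and it loads the rows with plain batched INSERTs via executemany instead of passing one huge varray to PL/SQL; the connection details, chunk size, and helper name are assumptions.

import oracledb

CHUNK_SIZE = 10_000  # illustrative batch size

def insert_in_chunks(rows):
    # rows: a list of 16-element tuples matching the TEST_STRESS column order
    sql = ("INSERT INTO test_stress (c_one, c_two, c_three, c_four, c_five, c_six, "
           "c_seven, c_eight, c_nine, c_ten, c_eleven, c_twelve, c_thirteen, "
           "c_fourteen, c_fiveteen, c_sixteen) "
           "VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16)")
    with oracledb.connect(user="admin", password="admin", dsn="blabla") as conn:
        cur = conn.cursor()
        for start in range(0, len(rows), CHUNK_SIZE):
            # each executemany call sends one bounded batch, keeping memory use flat
            cur.executemany(sql, rows[start:start + CHUNK_SIZE])
        conn.commit()

node-oracledb offers a similar executeMany call, so the same batching approach should carry over to the question's setup.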

Import Export data from a CSV file to SQL Server DB?

I am working on a web application where I have to implement import/export functionality. I am new to this, so I would like your suggestions; there are some issues I am facing while building it.
First, I have a JSON array coming from the front end (AngularJS):
[
  {
    "MyName": "Shubham",
    "UserType": "Premium",
    "DialCode": "India",
    "ContactNumber": "9876543210",
    "EmailAddress": "Contact#Shubh.com",
    "Country": "India",
    "Notes": "Notes-Notes-Notes-Notes"
  },
  {
    "MyName": "Shubham 2",
    "UserType": "Free Trial",
    "DialCode": "India",
    "ContactNumber": "123456789",
    "EmailAddress": "Contact2#Shubh.com",
    "Country": "India",
    "Notes": "Notes-Notes-Notes-Notes"
  }
]
Now I am converting this array to XML in Node.js using xmlbuilder, like this:
<UserXML>
<MyName>Shubham</MyName>
<UserType>Premium</UserType>
<DialCode>India</DialCode>
<ContactNumber>9876543210</ContactNumber>
<EmailAddress>Contact#Shubh.com</EmailAddress>
<Country>India</Country>
<Notes>Notes-Notes-Notes-Notes</Notes>
</UserXML>
<UserXML>
<MyName>Shubham 2</MyName>
<UserType>Free Trial</UserType>
<DialCode>India</DialCode>
<ContactNumber>123456789</ContactNumber>
<EmailAddress>Contact2#Shubh.com</EmailAddress>
<Country>India</Country>
<Notes>Notes2-Notes2-Notes2-Notes2</Notes>
</UserXML>
Now I am using this XML in SQL Server to insert these 2 records into my user table.
The issue is that I have another table where the country code and country name are saved, and my user table references it with a foreign key.
I have made a sample script:
DROP TABLE IF EXISTS #Country
DROP TABLE IF EXISTS #user
DROP TABLE IF EXISTS #UserType

CREATE TABLE #Country
(
    Id       INT PRIMARY KEY IDENTITY(1, 1),
    Name     VARCHAR(50) NOT NULL,
    DialCode VARCHAR(50) NOT NULL
)

CREATE TABLE #UserType
(
    Id   INT PRIMARY KEY IDENTITY(1, 1),
    Name VARCHAR(50) NOT NULL
)

CREATE TABLE #user
(
    Id            INT PRIMARY KEY IDENTITY(1, 1),
    Name          VARCHAR(50) NOT NULL,
    UserTypeId    INT NOT NULL,
    DialCodeId    INT NOT NULL,
    ContactNumber VARCHAR(50) NOT NULL,
    EmailAddress  VARCHAR(50) NOT NULL,
    CountryId     INT NOT NULL,
    Notes         VARCHAR(50) NOT NULL,
    FOREIGN KEY (CountryId) REFERENCES #Country (Id),
    FOREIGN KEY (UserTypeId) REFERENCES #UserType (Id)
);

INSERT INTO #Country (Name, DialCode)
VALUES ('India', '+91'),
       ('Dubai', '+971'),
       ('U.S', '+1');

INSERT INTO #UserType (Name)
VALUES ('Premium'),
       ('Free Trial');
CASE 1 (working fine): if I have a single record then there is no issue with this approach:
declare #xml xml = '<UserXML>
<Name>Shubham CASE-1</Name>
<UserType>Premium</UserType>
<DialCode>India</DialCode>
<ContactNumber>9876543210</ContactNumber>
<EmailAddress>Contact#Shubh.com</EmailAddress>
<Country>India</Country>
<Notes>Notes-Notes-Notes-Notes CASE-1</Notes>
</UserXML>'
Now I have to check the country name / user type and match them against the lookup tables to get the ids:
DECLARE #CountryId INT
,#UserType INT
SELECT #CountryId = id FROM #Country WHERE Name LIKE ''+(select U.Items.value('./DialCode[1]','NVARCHAR(200)') as DialCode FROM #xml.nodes('/UserXML') U(Items))+'%'
SELECT #UserType = id FROM #UserType WHERE Name LIKE ''+(select U.Items.value('./UserType[1]','NVARCHAR(200)') as UserType FROM #xml.nodes('/UserXML') U(Items))+'%'
INSERT INTO #user
SELECT
U.Item.query('./Name').value('.','VARCHAR(100)') Name,
#UserType,
#CountryId,
U.Item.query('./ContactNumber').value('.','VARCHAR(100)') ContactNumber,
U.Item.query('./EmailAddress').value('.','VARCHAR(100)') EmailAddress,
#CountryId,
U.Item.query('./Notes').value('.','VARCHAR(100)') Notes
FROM #Xml.nodes('/UserXML') AS U(Item)
CASE 2 (well, the issue is here): if I have multiple records, how can I check every node and then do a join (or something like that) so my insert query works correctly?
declare #xml2 xml = '<UserXML>
<Name>Shubham CASE-2</Name>
<UserType>Premium</UserType>
<DialCode>India</DialCode>
<ContactNumber>9876543210</ContactNumber>
<EmailAddress>Contact#Shubh.com</EmailAddress>
<Country>India</Country>
<Notes>Notes-Notes-Notes-Notes CASE-2</Notes>
</UserXML>
<UserXML>
<Name>Shubham 2 CASE-2</Name>
<UserType>Free Trial</UserType>
<DialCode>Dubai</DialCode>
<ContactNumber>123456789</ContactNumber>
<EmailAddress>Contact2#Shubh.com</EmailAddress>
<Country>Dubai</Country>
<Notes>Notes2-Notes2-Notes2-Notes2 CASE-2</Notes>
</UserXML>'
DECLARE #CountryId2 INT
,#UserType2 INT
SELECT #CountryId2 = id FROM #Country WHERE Name LIKE ''+(select U.Items.value('./DialCode[1]','NVARCHAR(200)') as DialCode FROM #xml2.nodes('/UserXML') U(Items))+'%'
SELECT #UserType2 = id FROM #UserType WHERE Name LIKE ''+(select U.Items.value('./UserType[1]','NVARCHAR(200)') as UserType FROM #xml2.nodes('/UserXML') U(Items))+'%'
INSERT INTO #user
SELECT
U.Item.query('./Name').value('.','VARCHAR(100)') Name,
#UserType,
#CountryId,
U.Item.query('./ContactNumber').value('.','VARCHAR(100)') ContactNumber,
-- U.Item.query('./EmailAddress').value('.','VARCHAR(100)') EmailAddress,
#CountryId,
U.Item.query('./Notes').value('.','VARCHAR(100)') Notes
FROM #xml2.nodes('/UserXML') AS U(Item)
Please, if you have any suggestions or a better approach for this task, help me out; I am new to this, so I don't know the best way to do it.
You can read the JSON directly and perform the JOIN:
SELECT entity.*
FROM OPENROWSET (BULK N'd:\temp\file.json', SINGLE_CLOB) as j
CROSS APPLY OPENJSON(BulkColumn)
WITH(
MyName nvarchar(100)
,UserType nvarchar(100)
,DialCode nvarchar(100)
,ContactNumber nvarchar(100)
,EmailAddress nvarchar(100)
,Country nvarchar(100)
,Notes nvarchar(500)
) AS entity
More info: see the SQL Server documentation on importing JSON documents and on OPENJSON.

DynamoDB Table query items using global secondary index

I am trying to query a DynamoDB table that has latitude and longitude for various locations. I want to get the values between certain coordinates as a user pans around the map.
The primary key for the table is city and the sort key is id. I created a global secondary index with lat as the partition key and lon as the sort key (to query for locations between two points in latitude and longitude).
I am trying to use this query:
let doc = require('dynamodb-doc');
let dynamo = new doc.DynamoDB();
...
var params = {
    TableName: "locations-dev",
    IndexName: "lat-lon-index",
    KeyConditionExpression: "lon between :lon2 and :lon1 AND lat between :lat1 and :lat2",
    ExpressionAttributeValues: {
        ":lat1": JSON.stringify(event.bodyJSON.east),
        ":lat2": JSON.stringify(event.bodyJSON.west),
        ":lon1": JSON.stringify(event.bodyJSON.north),
        ":lon2": JSON.stringify(event.bodyJSON.south)
    }
};

dynamo.query(params, function (err, data) {
    if (err) {
        console.error('Error with ', err);
        context.fail(err);
    } else {
        context.succeed(data);
    }
});
But I am getting this error:
{
"errorMessage": "Query key condition not supported",
"errorType": "ValidationException",
"stackTrace": [
...
]
}
Here is an example item in Dynamo:
{
"id": "18",
"lat": "39.923070",
"lon": "-86.036178",
"name": "Home Depot",
"phone": "(317)915-8534",
"website": "https://corporate.homedepot.com/newsroom/battery-recycling-one-million-pounds"
}
Partition keys (even in secondary indexes) in DynamoDB can only be queried with an equality condition. This constraint comes from their internal representation: the key is hashed to identify the item's partition, and those hashed values cannot be queried by range.
From Choosing the Right DynamoDB Partition Key:
Except for scan, DynamoDB API operations require an equal operator (EQ) on the partition key for tables and GSIs. As a result, the partition key must be something that is easily queried by your application with a simple lookup (for example, using key=value, which returns either a unique item or fewer items).
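To make the constraint concrete, this is roughly the only key-condition shape a Query will accept, sketched here with boto3 in Python (the table and index names come from the question; the values and bounds are illustrative):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("locations-dev")

resp = table.query(
    IndexName="lat-lon-index",
    KeyConditionExpression=(
        Key("lat").eq("39.923070")                          # partition key: equality only
        & Key("lon").between("-86.100000", "-86.000000")    # sort key: a range is allowed
    ),
)
print(resp["Items"])

A range condition on both lat and lon is therefore not expressible as a key condition; bounding-box style queries usually need a different key design (for example a geohash-based partition key) or a filtered Scan.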

Cassandra polymorphism

How can I model polymorphism in Cassandra?
For example, how would I store JSON like the examples below?
The first type, where the attachment added is a photo:
{
"id": 1,
"text": "Hello",
"read": true,
"attachment_type": 1,
"attachment": {
"photo_id": 1,
"photo_big_url": "htt://photo.com/1_big.jpg",
"photo_medium_url": "htt://photo.com/1_medium.jpg",
"photo_small_url": "htt://photo.com/1_small.jpg"
}
}
And the second type, where the attachment added is a video:
{
"id": 2,
"text": "Hi",
"read": true,
"attachment_type": 2,
"attachment": {
"video_id": 1,
"video_url": "htt://video.com/1.mpg",
}
}
Sure, you can combine your fields in a single Cassandra table:
CREATE TABLE media (
    id               int,
    name             text,
    read             boolean,
    attachment_type  int,
    photo_id         int,
    photo_big_url    text,
    photo_medium_url text,
    photo_small_url  text,
    video_id         int,
    video_url        text,
    PRIMARY KEY ((id), attachment_type)
);
Fields that are not populated are not stored in Cassandra, so you are not wasting disk space.
Depending on the driver you are using, you can declare different entities for Photo and Video, each having only the relevant fields, and map your results to entities accordingly.
With the DataStax driver for C# it would be something like:
mapper.Fetch<Photo>("SELECT * FROM media WHERE id = 1 and attachment_type = 1");
mapper.Fetch<Video>("SELECT * FROM media WHERE id = 1 and attachment_type = 2");
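With the Python driver, the same per-type projection might look roughly like this (the keyspace name and contact point are assumptions):

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])        # contact point is an assumption
session = cluster.connect("media_ks")   # keyspace name is an assumption

photo_rows = session.execute(
    "SELECT id, name, read, photo_id, photo_big_url, photo_medium_url, photo_small_url "
    "FROM media WHERE id = %s AND attachment_type = %s", (1, 1))

video_rows = session.execute(
    "SELECT id, name, read, video_id, video_url "
    "FROM media WHERE id = %s AND attachment_type = %s", (1, 2))

for row in photo_rows:
    print(row.photo_big_url)   # rows come back as named tuples; only photo columns were selected
for row in video_rows:
    print(row.video_url)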
