How do I enable materialized views in Cassandra?

create materialized view if not exists s.emp
as
select id, count(name) as count from employee primary key (name);
Query 1 ERROR: Materialized views are disabled. Enable in cassandra.yaml to use.

You'll need to add the following line to cassandra.yaml to enable materialised views:
materialized_views_enabled: true
You will need to enable it on all nodes of the cluster, then perform a rolling restart for the change to take effect.
Note that MVs are considered experimental, which is why they are disabled by default. Be aware of the pros and cons of MVs before using them in production, as I've discussed in this post -- https://community.datastax.com/articles/2774/.
For more information on the experimental status of materialised views, see the entries in NEWS.txt on GitHub. Cheers!
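A minimal sketch of the change and rolling restart, assuming a package install managed by systemd; depending on your Cassandra version the flag may be spelled enable_materialized_views (3.x / 4.0) or materialized_views_enabled (4.1+):
# in cassandra.yaml, on each node (4.1+ spelling shown)
materialized_views_enabled: true
# then drain and restart one node at a time (service name is an assumption)
nodetool drain
sudo systemctl restart cassandra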

This worked!
Open the cassandra.yaml file. On a Mac, if you installed Cassandra with
brew install cassandra
the file is under /usr/local/etc/cassandra. Search for "materialized" in that file and change the default
enable_materialized_views: false
to
enable_materialized_views: true
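Roughly, on a Homebrew install (the path is an assumption; on Apple Silicon it may be /opt/homebrew/etc/cassandra):
sed -i '' 's/^enable_materialized_views: false/enable_materialized_views: true/' /usr/local/etc/cassandra/cassandra.yaml
brew services restart cassandra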

Navigate to /etc/cassandra
Open and edit the file cassandra.yaml; you can use the command "nano cassandra.yaml".
Search for materialized_views_enabled: false and set it to true. By default it's usually near the bottom of the file.
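For example (the path and systemd unit name are assumptions for a typical package install):
grep -n "materialized_views" /etc/cassandra/cassandra.yaml
sudo nano /etc/cassandra/cassandra.yaml
sudo systemctl restart cassandra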

Related

How to force an ADX to update from source with all historical data

Does anyone know a way to force a child table to update from the source table? It would be a one-off command to run when the child table is created; after that we have an auto-update policy in place.
Googling suggests trying this, however it produces a syntax error:
.update policy childTable with (sourceTable)
Thanks!:)
An update policy is a mechanism that runs during ingestion and does not support backfill.
You can consider using a materialized view with the backfill property (if your transformation logic falls within its limitations), or create the child table based on a query, using the .set command.
If your source table is huge, you might need to split it into multiple commands.
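A hedged sketch of the .set approach, reusing the names from the question (the Timestamp filter is just a placeholder for splitting a large backfill):
// one-off: create childTable from the transformation function
.set childTable <| updateFunction
// for very large sources, backfill in ranges instead
.set-or-append async childTable <| updateFunction | where Timestamp between (datetime(2024-01-01) .. datetime(2024-02-01))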
We had to use this:
.append childTable <| updateFunction

Write data frame to hive table in spark

Could you please tell me if this command could cause problems by overwriting all tables in the DB:
df.write.option("path", "path_to_the_db/hive/").mode("overwrite").saveAsTable("result_data")
result_data is a new table in the DB; it didn't exist before.
After these commands, all tables disappeared.
I was using Spark3 and tried to solve an error:
Can not create the managed table('result_data').
The associated location('dbfs:/user/hive/warehouse/result_data') already exists.
I expected that a new table will be created without any issues if it doesn’t exist.
If path_to_the_db/hive contains other tables and you overwrite into that folder, it seems possible that the whole directory would be emptied first, yes. Perhaps you should instead use path_to_the_db/hive/result_data.
According to the error, though, your table does already exist.
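Roughly (the path here is just the question's placeholder):
# point the external location at a table-specific directory, not the database root
df.write.option("path", "path_to_the_db/hive/result_data").mode("overwrite").saveAsTable("result_data")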
You can also use Spark to register a temporary view in SQL code, then run an INSERT OVERWRITE query against the existing table.
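For example, a sketch assuming result_data already exists as a Hive table:
# register the DataFrame as a temporary view, then overwrite the existing table
df.createOrReplaceTempView("df_tmp")
spark.sql("INSERT OVERWRITE TABLE result_data SELECT * FROM df_tmp")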

Does sequelize support SQL Server views?

I'm working on a project where I need to define a model from SQL Server views. Is it possible to define a model from views, so that I can avoid table joins and reduce complexity?
Yes, you can point a Sequelize model at a view rather than a table, at least for SELECT statements (e.g. model.findAll()). I doubt that model.sync() would work against a view, but I haven't tested it. IIRC, some databases allow INSERT, UPDATE or DELETE against views in limited cases, and in those limited cases you might be able to use model.create(), model.update() or model.destroy() as well.
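A rough sketch of a read-only model mapped onto a view (the view name V_Funds is borrowed from the answer below; the columns and connection details are assumptions):
const { Sequelize, DataTypes } = require('sequelize');
const sequelize = new Sequelize('XXXX', 'sa', 'PASSWORD', { host: 'localhost', dialect: 'mssql' });

// map the model onto the view instead of a table
const Fund = sequelize.define('Fund', {
  id: { type: DataTypes.INTEGER, primaryKey: true },
  name: { type: DataTypes.STRING },
}, {
  tableName: 'V_Funds',
  timestamps: false, // views usually have no createdAt/updatedAt columns
});

// SELECTs work as usual; don't call Fund.sync() against a view
Fund.findAll().then(rows => console.log(rows.length));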
sequelize.sync() creates the models assuming that you only want to work with tables, and tries to create a table when you actually want a view, as per the example below.
Error message when the view already exists:
In OBJECT_ID('V_Funds','U'), the 'U' stands for a user table.
So forget about using sequelize.sync() and just use Sequelize-Auto (as per the comments above). Then just run a simple .bat or .cmd such as the one below, and it's all done for you automatically:
rem recreate the various models
node node_modules\sequelize-auto\bin\sequelize-auto -h localhost -d XXXX -e mssql -u sa -x PASSWORD -c "./config.json" -o "./../server/src/models"

Cassandra create and load data atomicity

I have got a web service which is looking for the last created table
[name_YYYYMMddHHmmss]
I have a persister job that creates and loads a table (insert or bulk)
Is there something that hides a table until it is fully loaded ?
First, I created a technical table; it works, but I would need one per keyspace (we use Cassandra auth). I don't like this.
I was thinking about tags, but they don't seem to exist:
- create a table with a tag and modify or remove it when the table is loaded.
There is also the table comment option.
Any ideas?
Table comment is a good option. We use it for some service information about the table, e.g. tracking table versions.
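A hedged CQL sketch of that idea (keyspace, table and column names are placeholders):
-- create the table with a marker comment before loading it
CREATE TABLE ks.emp_20240101120000 (id int PRIMARY KEY, name text) WITH comment = 'loading';
-- flip the marker once the persister job has finished
ALTER TABLE ks.emp_20240101120000 WITH comment = 'ready';
-- the web service can read the marker back from the schema tables
SELECT table_name, comment FROM system_schema.tables WHERE keyspace_name = 'ks';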

Is it possible to remove records from an itemtype using Flexible search service?

Is it possible to remove records from an itemtype using the Flexible Search service? As far as I know, Flexible Search is only used for SELECT operations.
Please suggest ways to remove records from an itemtype using a cron job. Thanks, appreciate it a lot.
Flexible search queries are not made to manipulate data (source). If you want to delete data you could:
Use the Model Service
Run an ImpEx file with the REMOVE header (source)
You can simply use an SQL query as well to remove the data.
Go to HAC -> Console -> Flexible Search
Switch the tab to SQL Query
Execute SQL delete query (DELETE FROM table_name)
Make sure you run the query in commit mode, otherwise Hybris will roll back the changes.
You can create a Groovy script to delete all the records in a table using the modelService, as mentioned above.
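A rough sketch for the HAC scripting console (assuming the flexibleSearchService and modelService beans are available by name, and using StockLevel as the example itemtype):
import de.hybris.platform.servicelayer.search.FlexibleSearchQuery
// fetch all StockLevel items, then remove them through the service layer
def result = flexibleSearchService.search(new FlexibleSearchQuery("SELECT {pk} FROM {StockLevel}"))
modelService.removeAll(result.result)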
The other option, if you want to delete records from a table massively, is to run an ImpEx with batchmode set to true, e.g.:
REMOVE StockLevel[batchmode=true];itemtype(code)[unique = true]
;StockLevel
Here itemtype is the attribute used to select which rows to delete; you can of course change it to whichever attribute fits your needs.
