Does jOOQ support a dialect for "SQL Data Warehouse"?
Any pointers?
From a jOOQ perspective, SQL Data Warehouse is just another flavour of SQL Server, as can be seen in the documentation:
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-reference-tsql-statements
Like SQL Server, SQL Data Warehouse implements parts of the T-SQL language.
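For example, a minimal sketch (assuming a standard JDBC connection; the class, query and connection string below are made up for illustration) would simply pick the SQL Server dialect when creating the jOOQ context:

    import static org.jooq.impl.DSL.*;

    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.jooq.DSLContext;
    import org.jooq.SQLDialect;

    public class SqlDwExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string for an Azure SQL Data Warehouse instance.
            String url = "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydw";

            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                // There is no dedicated dialect; the SQL Server dialect is used here instead.
                DSLContext ctx = using(conn, SQLDialect.SQLSERVER);

                // Render and execute an ordinary query through the T-SQL dialect.
                ctx.select(field("name"))
                   .from(table("sys.tables"))
                   .fetch()
                   .forEach(r -> System.out.println(r.get(0)));
            }
        }
    }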
I question why you're planning to use a Java-focussed query generator with an MPP data warehouse. If you're intending to use it for updates, deletes, etc. in some kind of ETL flow, you're going to be in a world of pain.
Nothing wrong with JOOQ, but maybe not the right technology for interacting with ASDW.
I am looking for a safe way to inject SQL queries generated by the knex package into existing SQL code. I was considering using the knex queryBuilder.toQuery() method, as it prints out a string, but I'm not sure how reliable this is in terms of SQL injection.
Maybe someone else has a better idea of how to approach this? Rewriting the existing SQL is not an option.
Thanks.
I started working on a Java project where the chosen database was the Azure Cosmos DB SQL API. Reading the Cosmos DB SQL API introduction, I understood that SQL, in this case, is only for querying and not for data manipulation (insert, delete).
The question is: Does it make sense to use a schema migration tool like Flyway/Liquibase for this kind of database?
Cosmos DB does not have any support for schemas at the database level. It's schema-free, with an indexing mechanism that allows for efficient querying of arbitrary JSON data. As such, a SQL schema migration tool doesn't make sense in this context and wouldn't work anyway. It's up to your application code to ensure that data is normalized and migrated to new formats if necessary.
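For example, a minimal (hypothetical) sketch of such application-level, migrate-on-read logic, assuming each document carries a schemaVersion field and that documents are handled as plain maps of parsed JSON:

    import java.util.Map;

    public final class DocumentMigrator {

        // Upgrades a raw document (parsed JSON as a Map) to the latest shape the application expects.
        public static Map<String, Object> migrate(Map<String, Object> doc) {
            int version = ((Number) doc.getOrDefault("schemaVersion", 1)).intValue();

            if (version < 2) {
                // Example change introduced in version 2: rename a field.
                if (doc.containsKey("fullname")) {
                    doc.put("fullName", doc.remove("fullname"));
                }
                doc.put("schemaVersion", 2);
            }
            // Further version bumps chain here; the caller can then upsert the
            // upgraded document back to the container if desired.
            return doc;
        }
    }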
A little late to the party, but I think this might help: https://github.com/liquibase/liquibase-cosmosdb. It's a Liquibase extension for Cosmos DB. So, pretty much what you were looking for!
I'm looking at some incredibly complex views in a SQL database. I've found several columns used in join or WHERE clauses that are not indexed. Needless to say, this is a performance hit.
Is there a way, preferably in SSMS, to have it tell me which columns are not indexed but should be?
This is Azure SQL Database, not SQL Server.
thanks - dave
Did you know you just need to enable "Automatic Tuning" on Azure SQL Database? It will create all missing indexes for you and automatically verify that query performance has improved. It will also drop unnecessary indexes and force the optimizer to use the best query plan. All of this is just one click away. Learn how to enable it here.
This is one of the benefits of using PaaS database services as a developer.
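If you prefer to enable it from code rather than the portal, a minimal sketch (assuming a JDBC connection to the target Azure SQL Database; server name and credentials are placeholders) might look like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class EnableAutomaticTuning {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string; adjust server, database and credentials.
            String url = "jdbc:sqlserver://myserver.database.windows.net:1433;"
                    + "database=mydb;user=admin;password=secret;encrypt=true";

            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement()) {
                // Turn on automatic index creation/removal and plan correction for this database.
                stmt.execute("ALTER DATABASE CURRENT SET AUTOMATIC_TUNING "
                        + "(CREATE_INDEX = ON, DROP_INDEX = ON, FORCE_LAST_GOOD_PLAN = ON)");
            }
        }
    }

The same ALTER DATABASE statement can be run once from SSMS instead; after that, the service applies and validates index recommendations on its own.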
So here's where I'm at.
I'm storing huge amounts of data in Data Lake Store. But when I want to build a report (it could be a month's worth of data), I want to schematize it into a table that I can refer to over and over again when querying.
Should I just use the built-in database feature that Data Lake Analytics provides by creating U-SQL tables (https://msdn.microsoft.com/en-us/library/azure/mt621301.aspx), or should I create this table in SQL Data Warehouse? I guess what I really want to know is: what are the pros and cons of either case, and when is it best to use each?
By the way, I'm a noob in this Microsoft Azure world. Still actively learning.
At this point it depends on what you want to do with the data.
If you need interactive report queries, then moving the data into a SQL DB or DW schema is recommended at this point until ADLA provides interactive query capabilities.
If you need the tables during your data preparation steps, want to use partitioning to manage data life cycles, or need to run U-SQL queries that can benefit from the clustering and data distribution offered by U-SQL tables, then you should use U-SQL tables.
I want to migrate a UniData database (which is multivalue) to SQL using .NET code. Is this possible? One possibility is SSIS, but this will consume a lot of time because we would have to run an ETL process on all the tables in the DB. So I was looking for .NET code with which I can connect to the UniData DB and migrate the data to SQL.
You're probably getting downvoted because this is an awfully general question, and it's not particularly a programming question, but rather a big project.
One piece of advice is to flip things around and extract the information from the UniData side, "exploding" out the multivalues into flat tables that your ETL process can consume. The challenge there (apart from writing UniBasic code) is identifying which multivalued fields are associated with each other. Unless you have very good documentation, that can be tough to do.