Create view in SQL Server CE 3.5

I'm using SQL Server CE as my database.
Can I create a view in SQL Server CE 3.5? I tried to create one, but it says the CREATE VIEW statement is not supported.
In my application I have a table called Alarm with 12 columns, but I only ever access three of them, so I want to create a view with just those three columns.
Will it improve performance?

It appears that SQL Server Compact Edition does indeed not support creating views.
But even if it did: if you're only selecting three columns from your table, a view will not help you here at all.
If you have a view AlarmView which is defined as
CREATE VIEW dbo.AlarmView
AS
SELECT Col1, Col2, Col3 FROM dbo.Alarm
then selecting from that view (SELECT * FROM dbo.AlarmView WHERE ...) essentially becomes
SELECT Col1, Col2, Col3 FROM dbo.Alarm
WHERE ...
so you get the same statement you'd write yourself.
Views are mostly not designed for performance gains (it helps a little that a view limits the number of columns returned by your SELECT). They're designed for limiting and modelling access to the tables: for example, you could grant a user SELECT permission on the view but not on the underlying table, so that user would never be able to see or select any of the other columns.
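As an illustration of that permission-based use of views (not something SQL Server CE supports, since it has neither views nor GRANT; the user name here is purely hypothetical), in full SQL Server it could look like this:

-- Create a user that can read alarms only through the view
CREATE USER ReportingUser WITHOUT LOGIN;
GRANT SELECT ON dbo.AlarmView TO ReportingUser;
-- No grant on dbo.Alarm itself, so the other nine columns stay out of reach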

Related

Using ADF, how to get file names loaded into a SQL Server table?

I am trying to use the GetMetadata activity and a CopyData activity together. My setup looks like this.
I am trying to get file names (using GetMetadata) and load them into a field in a SQL Server table (in conjunction with CopyData). The CopyData works perfectly fine, but I don't see any way to have GetMetadata get the file names and pass them into a field in a table. In my example, I have 4 fields in the source data which match 4 fields in the destination table; the 5th field, presumably, will be the file name. Apparently, it doesn't really work like this. I read through the documentation below and I still can't figure it out.
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-get-metadata-activity
Update July 2020
A new feature has recently been added to the Copy activity that allows you to add columns; $$FILEPATH is currently the only supported variable. See here for more detail:
https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview#add-additional-columns-during-copy
Original Answer
Adding an extra column to a dataset might be considered a transform, and the Azure Data Factory v2 (ADF v2) Copy activity does not lend itself easily to transforms. It can do a couple of things, like converting from one format (e.g. CSV) to another (e.g. JSON), but it is limited. Maybe at some point in the future they will add something to the mapping which allows adding string literals, or something similar to the SSIS Derived Column feature, but these kinds of features seem to be getting added to Mapping Data Flows at the moment.
One way to achieve this, however, is to use a stored procedure sink with a parameter for the file name and a table-type parameter for the main dataset. It looks a bit like this:
The downside is you now have to create a supporting table type in your database (CREATE TYPE) and a stored proc to handle it, something like this:
-- Table type that carries the main dataset from the Copy activity
CREATE TYPE dbo.typ_multiFile AS TABLE (
    col1 CHAR(1) NOT NULL,
    col2 CHAR(1) NOT NULL,
    col3 CHAR(1) NOT NULL
)
GO

-- Stored proc that receives the file name and the rows, and inserts both
CREATE OR ALTER PROC dbo.usp_ins_myTable (
    @fileName AS VARCHAR(100),
    @typ      AS dbo.typ_multiFile READONLY
)
AS
SET NOCOUNT ON

INSERT INTO dbo.myTable ( [fileName], col1, col2, col3 )
SELECT @fileName, col1, col2, col3
FROM @typ

RETURN
GO
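For a quick test of that proc outside ADF (the sample values are hypothetical), you could call it with a table-valued parameter like this:

-- Build a table-valued parameter, then call the proc the same way ADF's sink would
DECLARE @rows dbo.typ_multiFile;
INSERT INTO @rows (col1, col2, col3) VALUES ('a', 'b', 'c');
EXEC dbo.usp_ins_myTable @fileName = 'sample.csv', @typ = @rows;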
Note the Copy Task is inside a ForEach task, as per this diagram:

Alternative to External Table Join in Azure Database

I currently have multiple Azure DBs set up on a single server, primarily with different schemas.
External queries work perfectly fine on small tables. I am currently using an INNER JOIN on multiple tables across 2 DBs.
This works great for small tables with limited data sets, since it appears to physically copy the tables over to a temp table and then perform the query.
However, when I do a join on a large table (~500K rows) the query fails, as the size of the table causes a timeout while it tries to copy the table to the temp directory.
Is there a way to execute the query without copying the JOIN table to a temp directory?
I have previously tried creating stored procedures on the DB with the large table I am trying to join; however, that DB will eventually be sunset and I would be back where I am now, so I would like a longer-term solution.
Alternatively, consider consolidating your separate databases into one single database, e.g. using schemas to provide separation. Ask yourself why your databases are split: is having to join across them an occasional thing, or do they really need to be split? If joining across them is a regular task, then consolidating them makes sense.
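As a rough sketch of what schema-based separation could look like after consolidating into one database (the schema and table names are hypothetical):

-- Give each former database its own schema inside the consolidated database
CREATE SCHEMA LegacyApp AUTHORIZATION dbo;
GO
-- After importing the table into this database, move it under its own schema
ALTER SCHEMA LegacyApp TRANSFER dbo.LargeTable;
GO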
Alternatively, consider Managed Instance. This is a PaaS service but gives you an experience closer to traditional SQL Server. First off, you can have multiple databases in one instance, and cross-database joins are as easy as they are in the boxed SQL Server product.
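For illustration, a cross-database join on a Managed Instance just uses ordinary three-part names (the database, table and column names below are hypothetical):

-- Join a table in SalesDb to a table in CrmDb hosted on the same Managed Instance
SELECT o.OrderId, o.Quantity, c.CustomerName
FROM SalesDb.dbo.Orders AS o
INNER JOIN CrmDb.dbo.Customers AS c
    ON c.CustomerId = o.CustomerId;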
I ended up using CROSS APPLY (like an INNER JOIN) and OUTER APPLY (like a LEFT JOIN), then adding the join logic to a WHERE clause within the applied subquery:
SELECT c.Id, a.Quantity, b.Name
FROM Table1 a
CROSS APPLY
    (SELECT * FROM Table2 WHERE x = y) c   -- the WHERE clause carries the join condition against the outer table
INNER JOIN Table3 b ON c.x = b.x
-- add further joins the same way
This executed the joins on the target without having to bring the whole table into a temp table.

How do I access the Databricks table format column

I can display the Databricks table format using: DESCRIBE {database name}.{table name};
This will display something like:
format    id      ...
hive      null    ...
Is there a way to write a SQL statement like:
SELECT FORMAT FROM {some table} where database = {db name} and table = {table name};
I would like to know if there is a Databricks catalog table that I can query directly. I want to list all of the Databricks tables that have format = 'delta'.
Unlike a relational database management system, there is no system catalog you can query for this information directly.
You need to combine three Spark SQL statements with some Python DataFrame code to get the answer you want.
%sql
show databases
This command will list all the databases (schemas).
%sql
show tables from dim;
This command will list all the tables in a database (schema).
%sql
describe table extended dim.employee
This command will return detailed information about a table.
As you can see, we want to pick up the following fields (database, table, location, provider and type) for all tables in all databases, then filter for the tables whose provider is 'delta'.
Databricks has Unity Catalog in public preview right now.
https://learn.microsoft.com/en-us/azure/databricks/data-governance/unity-catalog/
Databricks has implemented the information schema found in most relational database management systems; it is part of this new release.
https://docs.databricks.com/sql/language-manual/sql-ref-information-schema.html
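For example, with Unity Catalog enabled, a query along these lines should list the Delta tables (information_schema.tables and its data_source_format column come from the Unity Catalog information schema; treat the exact names as an assumption if your workspace differs):

-- List every table registered in Unity Catalog that is stored in Delta format
SELECT table_catalog, table_schema, table_name, data_source_format
FROM system.information_schema.tables
WHERE data_source_format = 'DELTA';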
In theory, this statement would bring back information on all tables if Unity Catalog were enabled in my service. Since it is not enabled, the query processor does not understand my request.
In short, you can use spark.sql() and DataFrames to write a program to grab the information, but that is a lengthy task. An easier alternative is to use Unity Catalog; make sure it is available in your region.
To return the table format, we generally use DESCRIBE FORMATTED:
DESCRIBE FORMATTED [db_name.]table_name
DESCRIBE FORMATTED delta.`path-to-table` (Managed Delta Lake)
You cannot use a SELECT statement to get the format of the table.
The supported SQL SELECT statements are:
SELECT * FROM boxes
SELECT width, length FROM boxes WHERE height=3
SELECT DISTINCT width, length FROM boxes WHERE height=3 LIMIT 2
SELECT * FROM VALUES (1, 2, 3) AS (width, length, height)
SELECT * FROM VALUES (1, 2, 3), (2, 3, 4) AS (width, length, height)
SELECT * FROM boxes ORDER BY width
SELECT * FROM boxes DISTRIBUTE BY width SORT BY width
SELECT * FROM boxes CLUSTER BY length
For more details, refer to "Azure Databricks – SQL Guide: Select".
Hope this helps.

How to migrate data from one table to different tables using conditions in TALEND

I have a requirement where I need to migrate data from one table in an Oracle DB to different tables based on a condition, e.g. if a row contains value A in one column then insert it into tableA, else insert it into tableB. Can we do this using Talend?
Someone please guide me.
Yes, you can do a conditional load in Talend, and based on your scenario you can use Talend's filter expressions to do it. Check the screenshot for more details.
Add two Oracle output components for loading into table A and table B, as in the screenshot below.

How can I list all tables in a database with Squirrel SQL?

I use Squirrel SQL to connect to a JavaDB/Derby database on my desktop. I can run SQL queries.
But how can I list all tables in the database? And preferably all columns and column types.
Sometimes doing the above may not result in the tables showing. Before I figured this out, my table node would not be expandable and I could never get a list of the tables.
After a lot of searching on the internet, I learnt that you need to choose the schema from the catalog drop-down box located at the upper-left portion of the SQuirreL SQL client (before the toolbar icons) to be able to get the table list for that particular schema.
Hope that helps.
You can do it easily from the GUI. After you open your session, click the Objects tab, then expand the tree. Expand the db, schema, and then table nodes, and you'll see all of your tables. If you click on a particular table node, a table will open to the right. By clicking the Columns tab, you can get the column names, types, and other metadata.
Or are you looking for SQL commands?
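If you are after SQL, JavaDB/Derby exposes its catalog in the SYS schema. A sketch along these lines should list the user tables together with their columns and types (treat the exact catalog column names as an assumption to check against your Derby version):

-- List user tables plus their columns and data types from the Derby system catalog
SELECT s.SCHEMANAME, t.TABLENAME, c.COLUMNNAME, c.COLUMNDATATYPE
FROM SYS.SYSTABLES t
JOIN SYS.SYSSCHEMAS s ON s.SCHEMAID = t.SCHEMAID
JOIN SYS.SYSCOLUMNS c ON c.REFERENCEID = t.TABLEID
WHERE t.TABLETYPE = 'T'
ORDER BY s.SCHEMANAME, t.TABLENAME, c.COLUMNNUMBER;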
I know this is quite an old question. I was stuck with this for the last 3 days (Google search results didn't help). I'm using SQuirreL 3.4 and had to connect to an old DB2 database. I could connect to the DB but could not see the tables for 3 days. I finally got it; here is what worked for me:
Edit Alias Properties -> click Properties -> select the radio button "Specify schema loading and caching" -> click on "Connect database and refresh Schema table".
Once you do this, all the schemas are loaded in the pop-up window.
Select the ones you need and change the option to 'Load and cache'.
Reconnect to this session.
Select the schema name from the catalog drop-down and refresh.
We had this issue using SQuirreL SQL Client with Amazon Redshift PostgreSQL.
A short-term solution was just to use:
SELECT * FROM information_schema.columns
RJ.'s solution worked on some machines (thanks) but not on others.
In the end we realised it was a driver issue. We needed
postgresql-8.4-...jar from http://jdbc.postgresql.org/download.html#others
