While attempting to create a list of all IDs made since _____, I am able to get the results I want from the following:
DECLARE @BoID varchar(max)
SELECT @BoID = COALESCE(@BoID + ', ', '') +
CAST(ApplicationID AS varchar(10))
FROM BoList as "ID"
WHERE CreatedDate > '2017-07-01 18:14:09.210'
However, I am having issues with establishing a column name for the above statement. Where does the as "ID" need to go in order to give the result a column name of "ID"?
As the query stands now, you are giving the table BoList an alias of "ID" instead of the column. Since you are selecting the value into a variable, there is no output. You can do it like this...
SELECT COALESCE(@BoID + ', ', '') +
CAST(ApplicationID AS varchar(10)) as "ID"
FROM BoList
WHERE CreatedDate > '2017-07-01 18:14:09.210'
Or if you really do need to stash the value in a variable to return later as part of another query...
DECLARE @BoID varchar(max)
SELECT @BoID = COALESCE(@BoID + ', ', '') +
CAST(ApplicationID AS varchar(10))
FROM BoList
WHERE CreatedDate > '2017-07-01 18:14:09.210'
SELECT @BoID AS "ID", other columns... FROM whatever
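As an aside: on SQL Server 2017 and later, STRING_AGG can build the same comma-separated list without the variable trick, and it names the column directly. A minimal sketch against the same BoList table:
SELECT STRING_AGG(CAST(ApplicationID AS varchar(10)), ', ') AS "ID"
FROM BoList
WHERE CreatedDate > '2017-07-01 18:14:09.210'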
Using PostgreSQL, I can get multiple rows of JSON objects.
select (select ROW_TO_JSON(_) from (select c.name, c.age) as _) as jsonresult from employee as c
This gives me this result:
{"age":65,"name":"NAME"}
{"age":21,"name":"SURNAME"}
But in SQL Server, when I use the FOR JSON AUTO clause, it gives me an array of JSON objects instead of multiple rows.
select c.name, c.age from customer c FOR JSON AUTO
[{"age":65,"name":"NAME"},{"age":21,"name":"SURNAME"}]
How can I get the same result format in SQL Server?
By constructing separate JSON in each individual row:
SELECT (SELECT [age], [name] FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
FROM customer
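With the question's sample data, this returns one JSON object per row rather than a single array:
{"age":65,"name":"NAME"}
{"age":21,"name":"SURNAME"}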
There is an alternative form that doesn't require you to know the table structure (but likely has worse performance because it may generate a large intermediate JSON):
SELECT [value] FROM OPENJSON(
(SELECT * FROM customer FOR JSON PATH)
)
No knowledge of the table structure needed, with better performance (each row builds only its own JSON):
SELECT c.id, jdata.*
FROM customer c
cross apply
(SELECT * FROM customer jc where jc.id = c.id FOR JSON PATH , WITHOUT_ARRAY_WRAPPER) jdata (jdata)
Same as Barak Yellin's answer, but lazier:
1-Create this proc
CREATE PROC PRC_SELECT_JSON(@TBL VARCHAR(100), @COLS VARCHAR(1000)='D.*') AS BEGIN
EXEC('
SELECT X.O FROM ' + @TBL + ' D
CROSS APPLY (
SELECT ' + @COLS + '
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
) X (O)
')
END
2-Then call it with either all columns or specific columns:
CREATE TABLE #TEST ( X INT, Y VARCHAR(10), Z DATE )
INSERT #TEST VALUES (123, 'TEST1', GETDATE())
INSERT #TEST VALUES (124, 'TEST2', GETDATE())
EXEC PRC_SELECT_JSON '#TEST'
EXEC PRC_SELECT_JSON '#TEST', 'X, Y'
If you're calling this from PHP, add SET NOCOUNT ON; as the first statement in the proc (the extra "rows affected" messages can otherwise confuse PHP's drivers).
Could somebody please help me perform an insert into a Postgres database from Python?
I have a dataframe:
df_routes
From  To
A     B
B     A
for index, row in df_routes.iterrows():
    cursor.execute("""
        insert into table(id, version, create_ts, created_by,
                          station_from_code, station_to_code,
                          station_from_icao_code, station_to_icao_code)
        select newid() as id,
               0 as version,
               current_timestamp as create_ts,
               'source_py' as created_by,
               ds_dep.station_code as station_from_code,
               ds_arr.station_code as station_to_code,
               ds_dep.icao_code as station_from_icao_code,
               ds_arr.icao_code as station_to_icao_code
        from dictionary ds_dep
        join dictionary ds_arr
          on ds_dep.station_code = %s
         and ds_arr.station_code = %s
         and ds_dep.delete_ts is null
         and ds_arr.delete_ts is null
        where not exists (select null
                          from tsp_ams_navigation_route nr
                          where nr.station_from_code = %s
                            and nr.station_to_code = %s
                            and nr.delete_ts is null)
        """, (row.station_from_code, row.station_to_code,
              row.station_from_code, row.station_to_code))
    conn.commit()
    print('ROUTES inserted ' + row.station_from_code + '- ' + row.station_to_code)
This piece of code does not work: execution succeeds, but no rows are inserted. Please assist me.
Thanks!
I have a stored procedure in an Azure SQL database. In it, there is a requirement to insert records into a remote table from a #temp table.
As xxxx_table is in the remote database, I used sp_execute_remote.
Below is the scenario:
Create Procedure SP1 parameter1, Parameter2
As
select Distinct B.column1, B.Column2
into #A
from (Query1
Union
Query2) B
if (select count(1) from #A) > 0
Begin
Exec sp_execute_remote @data_source_name = N'Remotedatabase',
@stmt = N'INSERT INTO [dbo].[xxxx_table]
SELECT DISTINCT
'xxx' AS 'column1',
'xxx as 'Column2',
'xxx' AS 'Column3',
'xxx' AS 'Column4',
'xxx' AS Column4
FROM #A A INNER JOIN table1 on A.Column1 = Table1.Column2'
End
)
Getting the syntax error as below:
Incorrect syntax near 'xxx'.
Where am I going wrong? Or let me know if there is another way to achieve this.
If you need to dynamically build a string in SQL, single-quote the whole statement, or use 'some text' + 'another text' to concatenate pieces. If you must embed a single quote, escape it by doubling it: ''.
Example:
DECLARE @param1 INT;
DECLARE @param2 VARCHAR(10);
DECLARE @stmt NVARCHAR(4000);
SET @param1 = 10;
SET @param2 = 'CCDOS87';
SET @stmt = 'SELECT Field1 FROM TableName WHERE Field1 = '
+ CAST(@param1 AS VARCHAR(100))
+ ' AND Field1 = '''
+ @param2
+ ''''; -- the trailing '''' produces a single closing '
Your statement should therefore be:
SET @stmt = N'INSERT INTO [dbo].[Error_table]
SELECT DISTINCT
xxx AS column1,
xxx as Column2,
xxx AS Column3,
xxx AS Column4,
xxx AS Environment
FROM #A A INNER JOIN table1 on A.Column1 = Table1.Column2'
Update:
If your tables are in different databases but on the same server, use a three-part name (database, schema, table):
INSERT INTO DATABASE_NAME.SCHEMA_NAME.TABLE_NAME
SELECT Something
FROM DATABASE_NAME.SCHEMA_NAME.TABLE_NAME
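For example, a minimal sketch with illustrative names (OtherDb is assumed to be another database on the same server):
INSERT INTO OtherDb.dbo.TargetTable (Column1, Column2)
SELECT Column1, Column2
FROM dbo.SourceTable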
I am trying to replace certain customer names in my data.
Using Google BigQuery SQL, I was able to transform one part of a string into another via the REPLACE function, for one particular string.
Replace(CustomerName, 'ABC', 'XYZ')
However, I have a couple more for which I would need the REPLACE function, such that
Replace(CustomerName, 'PLO', 'Rustic')
Replace(CustomerName, 'Kix', 'BowWow')
and so on.
I've tried doing
Replace(CustomerName, 'ABC', 'XYZ') OR Replace(CustomerName, 'PLO', 'Rustic') OR Replace(CustomerName, 'Kix', 'BowWow')
but that got me an error message.
I've also tried
Replace(CustomerName, 'ABC', 'XYZ') AND Replace(CustomerName, 'PLO', 'Rustic') AND Replace(CustomerName, 'Kix', 'BowWow')
but that also got me an error message.
I am able to just use a "case when" statement and hardcode each one, but I'm wondering if there is a better/faster way using the REPLACE statement instead.
Thanks for your help.
The CASE WHEN option is pretty reasonable. Another option is to chain them together:
REPLACE(
REPLACE(
REPLACE(
CustomerName,
'ABC',
'XYZ'),
'PLO',
'Rustic'),
'Kix',
'BowWow')
Which one you pick really depends on the exact scenario. The chained REPLACE calls are probably faster, but they could overlap in weird ways (e.g., if the output to one replacement matches the input to a subsequent one). The CASE WHEN approach avoids that issue, but it's probably more expensive because you need to do one operation to find the substring and another to actually replace it.
Note that when you're using AND or OR, you're trying to combine the string output of REPLACE as if it were a boolean, which is why it's failing.
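A tiny sketch of that overlap hazard, with contrived values:
SELECT REPLACE(REPLACE('CAB', 'C', 'K'), 'K', 'Q')
-- the inner call yields 'KAB'; the outer call then rewrites the K it just
-- created, returning 'QAB' even though the original string contained no K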
When you have quite a number of replacements, chaining REPLACEs becomes impractical and turns into annoying manual work.
The approach below addresses this (assuming you maintain a lookup table with pairs: Word, Replacement):
SELECT CustomerName, fixedCustomerName FROM JS(
// input table
(
SELECT
CustomerName, Replacements
FROM YourTable
CROSS JOIN (
SELECT
GROUP_CONCAT_UNQUOTED(CONCAT(Word, ',', Replacement), ';') AS Replacements
FROM ReplacementLookup
)
) ,
// input columns
CustomerName, Replacements,
// output schema
"[
{name: 'CustomerName', type: 'string'},
{name: 'fixedCustomerName', type: 'string'}
]",
// function
"function(r, emit){
var Replacements = r.Replacements.split(';');
var fixedCustomerName = r.CustomerName;
for (var i = 0; i < Replacements.length; i++) {
var pat = new RegExp(Replacements[i].split(',')[0],'gi')
fixedCustomerName = fixedCustomerName.replace(pat, Replacements[i].split(',')[1]);
}
emit({CustomerName: r.CustomerName,fixedCustomerName: fixedCustomerName});
}"
)
You can test it using the example below:
SELECT CustomerName, fixedCustomerName FROM JS(
// input table
(
SELECT
CustomerName, Replacements
FROM (
SELECT CustomerName FROM
(SELECT '1234ABC567' AS CustomerName),
(SELECT '12 34 PLO 56' AS CustomerName),
(SELECT 'Kix' AS CustomerName),
(SELECT '98 ABC PLO Kix ABC 76 XYZ 54' AS CustomerName),
(SELECT 'ABCQweKIX' AS CustomerName)
) YourTable
CROSS JOIN (
SELECT
GROUP_CONCAT_UNQUOTED(CONCAT(Word, ',', Replacement), ';') AS Replacements
FROM (
SELECT Word, Replacement FROM
(SELECT 'XYZ' AS Word, 'QWE' AS Replacement),
(SELECT 'ABC' AS Word, 'XYZ' AS Replacement),
(SELECT 'PLO' AS Word, 'Rustic' AS Replacement),
(SELECT 'Kix' AS Word, 'BowWow' AS Replacement)
)
) ReplacementLookup
) ,
// input columns
CustomerName, Replacements,
// output schema
"[
{name: 'CustomerName', type: 'string'},
{name: 'fixedCustomerName', type: 'string'}
]",
// function
"function(r, emit){
var Replacements = r.Replacements.split(';');
var fixedCustomerName = r.CustomerName;
for (var i = 0; i < Replacements.length; i++) {
var pat = new RegExp(Replacements[i].split(',')[0],'gi')
fixedCustomerName = fixedCustomerName.replace(pat, Replacements[i].split(',')[1]);
}
emit({CustomerName: r.CustomerName,fixedCustomerName: fixedCustomerName});
}"
)
Please note: there is still an issue if the result of one replacement matches the input to a subsequent replacement.
I believe there are multiple ways to tackle this problem, and it depends on the size of your dataset, the practicality of simply making a guiding table by hand and uploading it to BigQuery, and the granularity of the data you want to replace.
If your values are very granular, you can create a table with "from" and "to" values on different columns, and join that table with your main table, and retrieve those values very cleanly.
# Replace the support_table table with your actual table
WITH support_table AS (
SELECT "ABC" AS OldValue, "XYZ" AS NewValue
)
SELECT main_table.OldValue, support_table.NewValue FROM main_table
JOIN support_table ON main_table.OldValue = support_table.OldValue
Now, if you want to replace a big list of different values with something, you can use REGEXP_REPLACE with a string containing all possible values.
If you have a very big list of items, you can use
STRING_AGG in a table with all the values you want to replace, or skip the STRING_AGG step and create said string by hand.
Both of the snippets below result in "item1|item2|item3". Choose which is faster for you to do.
# Replace the values_to_replace table with your actual table
WITH values_to_replace AS (
SELECT "item1" AS ColumnWithItemsToReplace
UNION ALL
SELECT "item2"
UNION ALL
SELECT "item3"
)
SELECT STRING_AGG(ColumnWithItemsToReplace,"|") FROM values_to_replace
SELECT r"item1|item2|item3"
STRING_AGG will retrieve all the values from a table or query and concatenate them using a separator of choice. If you use the pipe separator, you will be able to create a string like "item1|item2|item3|..."
For a regular expression, the pipe counts as "or", which means that the regex will interpret the string as "item1 or item2 or item3". Thus, if you pass that generated string to REGEXP_REPLACE as the values to be replaced, it will be considered valid.
Example code below:
REGEXP_REPLACE(
column_to_replace
,(SELECT STRING_AGG(ColumnWithItemsToReplace,"|") FROM `YourTable`)
,"Replacer"
)
Hope it helps.
I need a way to generically take a table and copy its data into a new table--basically the same thing that SELECT * INTO does in regular SQL Server. Is there a way to do this in SQL Azure? I only have the existing and new table names at this point.
I encountered the same problem, and the author's answer is not very detailed, so I will give some more information on how I solved it.
I needed to duplicate tables that start with a given prefix ('from_') into new tables with prefix ('to_').
Generate CREATE Statement
I used this query (found on Stack Overflow) to generate a CREATE statement for every table that starts with the 'from_' prefix.
select 'create table [' + so.name + '] (' + o.list + ')' + CASE WHEN tc.Constraint_Name IS NULL THEN '' ELSE 'ALTER TABLE ' + so.Name + ' ADD CONSTRAINT ' + tc.Constraint_Name + ' PRIMARY KEY ' + ' (' + LEFT(j.List, Len(j.List)-1) + ')' END as query,
OBJECTPROPERTY(object_id(so.name), 'TableHasIdentity') as tablehasidentity
from sysobjects so
cross apply
(SELECT
' ['+column_name+'] ' +
data_type + case data_type
when 'sql_variant' then ''
when 'text' then ''
when 'ntext' then ''
when 'decimal' then '(' + cast(numeric_precision as varchar) + ', ' + cast(numeric_scale as varchar) + ')'
else coalesce('('+case when character_maximum_length = -1 then 'MAX' else cast(character_maximum_length as varchar) end +')','') end + ' ' +
case when exists (
select id from syscolumns
where object_name(id)=so.name
and name=column_name
and columnproperty(id,name,'IsIdentity') = 1
) then
'IDENTITY(' +
cast(ident_seed(so.name) as varchar) + ',' +
cast(ident_incr(so.name) as varchar) + ')'
else ''
end + ' ' +
(case when IS_NULLABLE = 'No' then 'NOT ' else '' end ) + 'NULL ' +
case when information_schema.columns.COLUMN_DEFAULT IS NOT NULL THEN 'DEFAULT '+ information_schema.columns.COLUMN_DEFAULT ELSE '' END + ', '
from information_schema.columns where table_name = so.name
order by ordinal_position
FOR XML PATH('')) o (list)
left join
information_schema.table_constraints tc
on tc.Table_name = so.Name
AND tc.Constraint_Type = 'PRIMARY KEY'
cross apply
(select '[' + Column_Name + '], '
FROM information_schema.key_column_usage kcu
WHERE kcu.Constraint_Name = tc.Constraint_Name
ORDER BY
ORDINAL_POSITION
FOR XML PATH('')) j (list)
where xtype = 'U'
AND name NOT IN ('dtproperties') AND name like 'from_%'
This query results in a set of values:
['query'] = create table [from_users_roles] ( [uid] int NOT NULL DEFAULT ((0)), [rid] int NOT NULL DEFAULT ((0)), )ALTER TABLE from_users_roles ADD CONSTRAINT from_users_roles_pkey PRIMARY KEY ([uid], [rid])
['tablehasidentity'] = 1 or 0
Now replace the 'from_' prefixes in the query with 'to_', and the CREATE statement is finished:
create table [to_users_roles] ( [uid] int NOT NULL DEFAULT ((0)), [rid] int NOT NULL DEFAULT ((0)), )ALTER TABLE to_users_roles ADD CONSTRAINT to_users_roles_pkey PRIMARY KEY ([uid], [rid]);
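A trivial sketch of that substitution (shown here over a literal; you could equally run REPLACE over the generated query column):
SELECT REPLACE('create table [from_users_roles] ([uid] int NOT NULL)', 'from_', 'to_')
-- returns: create table [to_users_roles] ([uid] int NOT NULL)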
Create INSERT Statement
When you want to insert data from one table to another, you have to distinguish between two cases:
TablehasIdentity == 0
INSERT INTO to_users_roles SELECT * FROM from_users_roles
TablehasIdentity == 1
This case is a bit more complex. The statement requires a column list and IDENTITY_INSERT switched on.
DECLARE @query nvarchar(4000)
DECLARE @columnlist nvarchar(4000)
-- Result of this query e.g.: "[cid], [pid], [nid], [uid], [subject]"
SET @columnlist = (SELECT SUBSTRING((SELECT ', ' + QUOTENAME(COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'from_users_roles' ORDER BY ORDINAL_POSITION FOR XML path('')), 3, 200000))
SET @query = 'SET IDENTITY_INSERT to_users_roles ON; INSERT INTO to_users_roles (' + @columnlist + ') SELECT ' + @columnlist + ' FROM from_users_roles; SET IDENTITY_INSERT to_users_roles OFF'
exec sp_executesql @query;
This worked out for me pretty well.
The latest version of Azure SQL DB, now in Preview, supports the SELECT INTO syntax and no longer requires a clustered index. For a detailed description of its features, and how to use it, see http://azure.microsoft.com/en-us/documentation/articles/sql-database-preview-whats-new/
After doing more research, it looks like there is no simple way to do this. You basically have to read the table's schema information and create the new table based on that.
SELECT INTO is now supported in SQL DB V12. Just upgrade your server and start using the syntax.
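For reference, a minimal sketch of the syntax (table names illustrative):
SELECT *
INTO dbo.Entities_Copy
FROM dbo.Entities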
I found a clever trick on this blog:
Instead of using "select into", use "insert select".
First you have to create the destination table. To do this, right-click on the source table in SQL Server Management Studio and choose "Script Table as" -> "Create To" -> "New Query Window".
Then change the name of the table in the query and execute it. Below is an example where I have appended today's date to the new table's name, calling it "Entities_2015_08_24" (the old table was called "Entities"):
CREATE TABLE [dbo].[Entities_2015_08_24](
[Url] [nvarchar](max) NULL,
[ClientID] [nvarchar](max) NULL
)
Then, do an "insert select" from the old table (Entities) into the new table (Entities_2015_08_24):
INSERT INTO [dbo].[Entities_2015_08_24]
([Url]
,[ClientID]
)
SELECT
[Url]
,[ClientID]
FROM [dbo].[Entities]
Q: Did you try it?
Q: Did you look at the SQL Azure documentation?
ADDENDUM
AFAIK, you cannot use SELECT INTO syntax to "clone" a table in Azure SQL, because Azure requires a clustered index and SELECT INTO has no provision for defining one.
Details, and a potential workaround, are here:
http://blogs.msdn.com/b/windowsazure/archive/2010/05/04/select-into-with-sql-azure.aspx
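The usual workaround, sketched here with illustrative names, is to create the target table with a clustered index first and then copy the rows with INSERT ... SELECT:
CREATE TABLE dbo.TargetTable
(
Id int NOT NULL,
Name nvarchar(100) NULL,
CONSTRAINT PK_TargetTable PRIMARY KEY CLUSTERED (Id)
)
INSERT INTO dbo.TargetTable (Id, Name)
SELECT Id, Name
FROM dbo.SourceTable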