Could you please help me check this?
CREATE TABLE Sales
(
Category NVARCHAR(30),
Region NVARCHAR(30),
Amount MONEY
)
DELETE FROM Sales
INSERT INTO Sales
VALUES
('X','1',24),
('X','2',NULL),
('X','3',165),
('X','4',36),
('Y','1',38),
('Y','2',181),
('Y','3',287),
('Y','4',NULL),
('Z','1',83),
('Z','2',55),
('Z','3',33),
('Z','4',44)
DECLARE @SQLStr NVARCHAR(MAX)
SELECT @SQLStr = COALESCE(@SQLStr + ',', '') + [a].[Column]
FROM (SELECT DISTINCT Region AS [Column]
      FROM Sales) AS a
SET @SQLStr = 'SELECT Category, ' + @SQLStr + ' FROM (SELECT Category, Region, Amount FROM Sales) sq '
    + ' PIVOT (SUM(Amount) FOR Region IN (' + @SQLStr + ')) AS pt'
PRINT @SQLStr
EXEC sp_executesql @SQLStr
I get an error
Msg 102, Level 15, State 1, Line 35
Incorrect syntax near '1'.
Initially, the column 'Region' was of type INT (values are 1, 2, 3, 4). I then changed its type to NVARCHAR(30), but the same error happens. Please help.
In reality, I want to transpose vertical product usage data to horizontal. Please see the image link and the screenshot; this is what I really want to achieve. As you can see, the dates are transposed into columns. I want to lay the usage data out horizontally for my analysis. Please help.
Can the column headers be dates, or do they have to be strings?
Add brackets around each column name, like below:
DECLARE @SQLStr NVARCHAR(MAX)
SELECT @SQLStr = COALESCE(@SQLStr + ',', '') + '[' + [a].[Column] + ']'
FROM (SELECT DISTINCT Region AS [Column]
      FROM #Sales) AS a
You should always place square brackets [] around the column names.
From MSDN:
The first character must be one of the following:
A letter as defined by the Unicode Standard 3.2. The Unicode definition of letters includes Latin characters from a through z, from A through Z, and also letter characters from other languages.
The underscore (_), at sign (@), or number sign (#).
If the first character is anything else, then you will need to use square brackets [].
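For reference, here is a minimal sketch of the full corrected script; it assumes the same Sales table as above and uses the built-in QUOTENAME function as an alternative way to add the brackets:

DECLARE @SQLStr NVARCHAR(MAX);

-- QUOTENAME wraps each distinct Region value in [], so values such as 1, 2, 3, 4
-- become valid column names in the PIVOT IN list.
SELECT @SQLStr = COALESCE(@SQLStr + ',', '') + QUOTENAME(a.[Column])
FROM (SELECT DISTINCT Region AS [Column] FROM Sales) AS a;

SET @SQLStr = 'SELECT Category, ' + @SQLStr
    + ' FROM (SELECT Category, Region, Amount FROM Sales) sq '
    + ' PIVOT (SUM(Amount) FOR Region IN (' + @SQLStr + ')) AS pt';

PRINT @SQLStr;
EXEC sp_executesql @SQLStr;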
I have a string containing a certain number of words (it may vary from 1 to many) and I need to find the records of a table which contain ALL those words, in any order.
For instance, suppose that my input string is 'yellow blue red' and I have a table with the following records:
1 yellow brown white
2 red blue yellow
3 black blue red
The query should return record 2.
I know that the basic approach should be something similar to this:
select * from mytable where colors like '%yellow%' and colors like '%blue%' and colors like '%red%'
However, I am not able to figure out how to turn the words of the string into separate LIKE parameters.
I have this code that splits the words of the string into a table, but now I am stuck:
DECLARE @mystring varchar(max) = 'yellow blue red';
DECLARE @terms TABLE (term varchar(max));

INSERT INTO @terms
SELECT Split.a.value('.', 'NVARCHAR(MAX)') AS term
FROM (SELECT CAST('<X>' + REPLACE(@mystring, ' ', '</X><X>') + '</X>' AS XML) AS String) AS A
CROSS APPLY String.nodes('/X') AS Split(a);

SELECT * FROM @terms;
Any idea?
First, put that XML junk in a function:
CREATE FUNCTION dbo.SplitThem
(
    @List NVARCHAR(MAX),
    @Delimiter NVARCHAR(255)
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN ( SELECT Item = y.i.value(N'(./text())[1]', N'nvarchar(4000)')
         FROM ( SELECT x = CONVERT(XML, '<i>'
                    + REPLACE(@List, @Delimiter, '</i><i>')
                    + '</i>').query('.')
              ) AS a CROSS APPLY x.nodes('i') AS y(i));
Now you can extract the words in the table, join them to the words in the input string, and discard any that don't have the same count:
DECLARE @mystring varchar(max) = 'red yellow blue';
;WITH src AS
(
SELECT t.id, t.colors, fc = f.c, tc = COUNT(t.id)
FROM dbo.mytable AS t
CROSS APPLY dbo.SplitThem(t.colors, ' ') AS s
INNER JOIN (SELECT Item, c = COUNT(*) OVER()
FROM dbo.SplitThem(@mystring, ' ')) AS f
ON s.Item = f.Item
GROUP BY t.id, t.colors, f.c
)
SELECT * FROM src
WHERE fc = tc;
Output:
id  colors           fc  tc
--  ---------------  --  --
2   red blue yellow  3   3
Example db<>fiddle
This disregards any possibility of duplicates on either side and ignores the larger overarching issue that this is the least optimal way possible to store sets of things. You have a relational database, use it! Surely you don't think the tags on this question are stored somewhere as the literal string
string sql-server-2012 sql-like
Of course not; these question:tag relationships are stored in a, well, relational table. Splitting strings is for the birds and for those with all kinds of CPU and time to spare.
If you are storing a delimited list in a single column then you really need to normalize it out into a separate table.
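For illustration only, here is a hedged sketch of what that normalized design might look like (the mytable_colors name and column sizes are made up):

-- One row per (record, color) pair instead of a space-delimited list
CREATE TABLE dbo.mytable_colors
(
    id    int         NOT NULL,   -- references mytable.id
    color varchar(30) NOT NULL,
    CONSTRAINT PK_mytable_colors PRIMARY KEY (id, color)
);

-- "Contains all of these colors" then becomes a plain relational query, no LIKE needed:
SELECT mc.id
FROM dbo.mytable_colors AS mc
WHERE mc.color IN ('yellow', 'blue', 'red')
GROUP BY mc.id
HAVING COUNT(DISTINCT mc.color) = 3;   -- 3 = number of search terms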
But assuming you actually want to just do multiple free-form LIKE comparisons, you can do them against a list of values:
select *
from mytable t
where not exists (select 1
from (values
('%yellow%'),
('%blue%'),
('%red%')
) v(search)
where t.colors not like v.search
);
Ideally you should pass these values in as a table-valued parameter; then you just put that into your query:
select *
from mytable t
where not exists (select 1
from @tmp v
where t.colors not like v.search
);
If you want to simulate OR semantics rather than AND, then change not exists to exists and not like to like.
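For completeness, here is a hedged sketch of the table-valued parameter approach mentioned above (the dbo.SearchPatterns type and dbo.FindAllColors procedure are illustrative names, not anything from your schema):

-- A table type to hold the LIKE patterns
CREATE TYPE dbo.SearchPatterns AS TABLE (search nvarchar(100) NOT NULL);
GO

CREATE PROCEDURE dbo.FindAllColors
    @patterns dbo.SearchPatterns READONLY
AS
BEGIN
    -- Keep rows that match every pattern in the parameter
    SELECT t.*
    FROM mytable AS t
    WHERE NOT EXISTS (SELECT 1
                      FROM @patterns AS v
                      WHERE t.colors NOT LIKE v.search);
END
GO

-- Caller side:
DECLARE @p dbo.SearchPatterns;
INSERT @p (search) VALUES ('%yellow%'), ('%blue%'), ('%red%');
EXEC dbo.FindAllColors @patterns = @p;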
I am trying to use U-SQL to remove duplicate, null, '', and NaN cells in a specific column called "Function" of a CSV file. I also want to keep the Product column correctly aligned with the Function column after the blank rows are removed, so I would want to remove the same rows from the Product column as I do from the Function column to keep them properly aligned. I want to keep only one occurrence of each duplicate Function row; in this case I only want to keep the very first occurrence. The Product column has no empty cells and has all unique values. Any help is greatly appreciated.

I know this can be done in a much easier way, but I want to use the code to automate the process as the data in the Data Lake changes over time. I think I am somewhat close with the code I currently have. The actual data set is a very large file, and I am fairly certain that there are at least 4 duplicate values in the Function column that aren't simply empty cells. I need to eliminate both duplicate values and empty cells in the Function column, because empty cells are being recognized as duplicates as well. I want to be able to use the Function values as a primary key in the next step of my school project, which won't include the Product column.
DECLARE @inputfile string = "/input/Function.csv";
//DECLARE @OutputUserFile string = "/output/Test_Function/UniqueFunction.csv";

@RawData =
    EXTRACT Function string,
            Product string
    FROM @inputfile
    USING Extractors.Csv(encoding: Encoding.[ASCII]);

// Query from Function data
// Set ROW_NUMBER() of each row within the window partitioned by Function field
@RawDataDuplicates =
    SELECT ROW_NUMBER() OVER (PARTITION BY Function) AS RowNum, Function AS function
    FROM @RawData;

// ORDER BY Function to see duplicate rows next to one another
@RawDataDuplicates2 =
    SELECT *
    FROM @RawDataDuplicates
    ORDER BY function
    OFFSET 0 ROWS;

// Write to file
//OUTPUT @RawDataDuplicates2
//TO "/output/Test_Function/FunctionOver-Dups.csv"
//USING Outputters.Csv();

// GROUP BY and count # of duplicates per Function
@groupBy =
    SELECT Function, COUNT(Function) AS FunctionCount
    FROM @RawData
    GROUP BY Function
    ORDER BY Function
    OFFSET 0 ROWS;

// Write to file
//OUTPUT @groupBy
//TO "/output/Test_Function/FunctionGroupBy-Dups.csv"
//USING Outputters.Csv();

@RawDataDuplicates3 =
    SELECT *
    FROM @RawDataDuplicates2
    WHERE RowNum == 1;

OUTPUT @RawDataDuplicates3
TO "/output/Test_Function/FunctionUniqueEmail.csv"
USING Outputters.Csv(outputHeader: true);

//OUTPUT @RawData
//TO @OutputUserFile
//USING Outputters.Csv(outputHeader: true);
I have also commented out some code that I don't necessarily need. When I run the code as it is, I currently get this error: E_CSC_USER_REDUNDANTSTATEMENTINSCRIPT, Error Message: This statement is dead code.
It does not give a line number, but it is likely the "Function AS function" line?
Here is a sample file that is a small slice of the full spreadsheet and only includes data in the 2 relevant columns. The full spreadsheet has data in all columns.
https://www.dropbox.com/s/auu2aco4b037xn7/Function.csv?dl=0
Here is a screenshot of the output I get when I follow wBob's advice and click.
You can apply a series of transformations to your data, using string functions like .Length and ranking functions like ROW_NUMBER to remove the records you want to drop, for example:
@input =
    EXTRACT
        CompanyID string,
        division string,
        store_location string,
        International_Id string,
        Function string,
        office_location string,
        address string,
        Product string,
        Revenue string,
        sales_goal string,
        Manager string,
        Country string
    FROM "/input/input142.csv"
    USING Extractors.Csv(skipFirstNRows : 1);

// Remove rows with an empty Function
@working =
    SELECT *
    FROM @input
    WHERE Function.Length > 0;

// Rank the rows by Function and keep only the first one
@working =
    SELECT CompanyID,
           division,
           store_location,
           International_Id,
           Function,
           office_location,
           address,
           Product,
           Revenue,
           sales_goal,
           Manager,
           Country
    FROM
    (
        SELECT *,
               ROW_NUMBER() OVER(PARTITION BY Function ORDER BY Product) AS rn
        FROM @working
    ) AS x
    WHERE rn == 1;

@output = SELECT * FROM @working;

OUTPUT @output TO "/output/output.csv"
USING Outputters.Csv(quoting:false);
My results:
I need to add columns to a database, and every second column does not have a name. I wanted to give them a generic "x" name, but I get the error sqlite3.OperationalError: duplicate column name: x.
for meci in ech1:
    c.execute("ALTER TABLE Aranjate ADD COLUMN " + ech1[ii] + " INT")
    c.execute("ALTER TABLE Aranjate ADD COLUMN x INT")
    c.execute("ALTER TABLE Aranjate ADD COLUMN " + ech2[ii] + " INT")
    conn.commit()
    ii = ii + 1
I also tried to replace x with x = str(ii) so it would not have the same name, and to insert it as a variable:
c.execute("ALTER TABLE Aranjate ADD COLUMN " + x + " INT")
but I suppose that SQLite does not accept integers as column names, as I get the error sqlite3.OperationalError: near "0": syntax error, where 0 is the first x.
It would not be a problem for me if those columns were named the same, as all I will do with this table is export it as a CSV file.
SQLite does not allow duplicate column names, because that would defeat the purpose of a database: you would have contradicting data for the same property in each row.
SQLite also does not allow column names to start with digits, so you can't have 0one as a column name, but zero1 is fine. However, you can add brackets around the name to make it valid.
Try:
c.execute("ALTER TABLE Aranjate ADD COLUMN [" + x + "] INT")
As stated in the topic, I want to have a conditioned subset of an internal
table inside another internal table.
Let us first look at what it may look like the old-fashioned way.
DATA: lt_hugeresult TYPE tty_mytype,
lt_reducedresult TYPE tty_mytype.
SELECT "whatever" FROM "wherever"
INTO CORRESPONDING FIELDS OF TABLE lt_hugeresult
WHERE "any_wherecondition".
IF sy-subrc = 0.
lt_reducedresult[] = lt_hugeresult[].
DELETE lt_reducedresult WHERE col1 EQ 'a value'
AND col2 NE 'another value'
AND col3 EQ 'third value'.
.
.
.
ENDIF.
We all may know this.
Now I was reading about the table reducing stuff, which was introduced with ABAP 7.40, apparently SP8.
Table Comprehensions – Building Tables Functionally
Table-driven:
VALUE tabletype( FOR line IN tab WHERE ( … )
( … line-… … line-… … )
)
For each selected line in the source table(s), construct a line in the result table. Generalization of value constructor from static to dynamic number of lines.
I was experimenting with that, but the results do not really seem to fit; perhaps I am doing it wrong, or I might even need the condition-driven approach.
So, what would it look like if I wanted to write the above statement with table comprehension techniques?
Until now I have this, which does not deliver what I need, and I have seen that it seems as if the "not equal" is not possible...
DATA(reduced) = VALUE tty_mytype( FOR checkline IN lt_hugeresult
WHERE ( col1 = 'a value' )
( col2 = 'another value' )
( col3 = space )
).
Does anyone have some hints?
EDIT: It still seems not to work. Here is how I do it:
Executable line:
Debugger results:
Wrong Reduced:
And what now ???
You could use the FILTER operator with the EXCEPT WHERE addition to filter out any rows that match the where clause:
lt_reducedresult = FILTER # ( lt_hugeresult EXCEPT WHERE col1 = 'a value'
AND col2 <> 'another value'
AND col3 = 'a third value' ).
Note that lt_hugeresult would have to be a sorted table, and the col1/col2/col3 need to be key components (you can specify a secondary key using the USING KEY addition).
The documentation for FILTER explicitly notes that:
Table filtering can also be performed using a table comprehension or a table reduction with an iteration expression for table iterations with FOR. The operator FILTER provides a shortened format for this special case and is more efficient to execute.
A table filter constructs the result row by row. If the result contains almost all rows in the source table, this method can be slower than copying the source table and deleting the surplus rows from the target table.
So your approach of using DELETE might actually be appropriate depending on the size of the table.
Table iterations may be quite confusing when you use WHERE, because of the parenthesis groups.
The "NOT EQUAL" condition is very well supported, as I show below in the solution of your first example. The issue you observe is due to misproper use of parenthesis groups.
You must absolutely define the whole logical expression after WHERE inside ONE parenthesis group (one or several elementary conditions separated by logical operators AND, OR, etc.).
After the parenthesis group for WHERE, you define usually only one parenthesis group which corresponds to the line to be added to the target internal table. You may define subsequent parenthesis groups, if for each line in the source internal table, you want to add several lines in the target internal table.
In your example, only the first parenthesis group applies to WHERE (either col1 = 'a value' in your first example, or insplot = _ilnum in your second example).
The subsequent parenthesis groups correspond to the lines to be added, i.e. 2 lines are added for each source line in the first example (one line with col2 = 'another value', and one line with col3 = space), and 3 lines are added for each source line in the second example (one line with inspoper = i_evaluation-inspoper, one line with inspchar = i_evaluation-inspchar, one line corresponding to the line of _single_results).
So, you should write your code as follows.
First example :
DATA(reduced) = VALUE tty_mytype( FOR checkline IN lt_hugeresult
WHERE ( col1 = 'a value'
AND col2 <> 'another value'
AND col3 = 'third value'
)
( checkline )
).
Second example :
DATA(singres) = VALUE tbapi2045d4( FOR checkline IN _single_results
WHERE ( insplot = _ilnum
AND inspoper = i_evaluation-inspoper
AND inspchar = i_evaluation-inspchar
)
( checkline )
).
I compared the old-fashioned syntax of your above example with the table comprehension technique and got exactly the same result.
Actually, your sample is not functional because it lacks the row specification for the constructed table reduced.
Try this one, which worked for me.
DATA(reduced) = VALUE tty_mytype( FOR checkline IN lt_hugeresult
WHERE ( col1 = 'a value' AND
col2 = 'another value' AND
col3 = space )
( checkline )
).
In the above sample we have the most basic type of result row specification, where it is absolutely similar to the source table. More sophisticated examples, where new table rows are evaluated with table iterations, can be found here.
I want a subquery which returns columns from different tables.
For example, I am writing the code in a way similar to below:
USE Northwind
SELECT *,
       (SELECT OrderID
        FROM dbo.Orders OI
        WHERE OI.OrderID IN (SELECT OI.OrderID
                             FROM [dbo].[Order Details] OD
                             WHERE OD.UnitPrice = P.UnitPrice)) AS 'ColumName'
FROM Products P
ERROR: Msg 512, Level 16, State 1, Line 1
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression.
What's the mistake in this code?
Please reply soon.
Saradhi
SELECT OrderID FROM dbo.Orders OI WHERE OI.OrderID IN (SELECT OI.OrderID FROM [dbo].[Order Details] OD WHERE OD.UnitPrice = P.UnitPrice)
This query is returning more than one OrderId while it should be returning only one. See if your data is correct.
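If the goal is to see every matching order for each product rather than a single scalar value, a rough sketch (assuming the standard Northwind Products and [Order Details] tables) is to use a join, which returns one row per product/order-detail match instead of trying to fit multiple OrderIDs into one column:

-- One result row per matching (product, order detail) pair
SELECT P.ProductID, P.ProductName, OD.OrderID
FROM dbo.Products AS P
INNER JOIN dbo.[Order Details] AS OD
        ON OD.UnitPrice = P.UnitPrice   -- same correlation the subquery used
ORDER BY P.ProductID, OD.OrderID;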