I have data as:
Image of data I have
I want to add flag variables in the data as:
Image of data I want
I have tried the lag function, but it didn't work because the variable is character.
I want to flag any change in the string variable. Please help.
I solved this using a query along the lines of:
CREATE TEMP TABLE WANT AS (
    SELECT *,
           CASE WHEN LAG(NAME) OVER (PARTITION BY ID ORDER BY ID) != NAME
                THEN 1
                ELSE 0
           END AS FLAG1
    FROM DATA_HAVE
    ORDER BY ID
);
No judging, just sharing.
I have a static prompt which is a single select. In it I have two values, let's call them A and B. When I select option 'A', my report pulls all data from the DB, which is expected. When the user selects option 'B', the report should pull only the records whose code = 'M'. Here code is a column name in the report.
Note: For option 'A' I don't need to set any prompt in the report because it should pull all records by default.
Let's assume your parameter is named param and your data item is named item.
Filter expression:
if (?param? = 'A')
then ([item])
else ('M')
= [item]
Note: You absolutely need to use a prompt. The result of selecting A should be to not filter.
I think I understand, try this:
Make the prompt a single value (i.e. B) with a use value of 'M'
Make the HEADER TEXT for the prompt A (so it is not an actual selection)
Make the filter optional
If the user selects A, the prompt is NULL and the optional filter is ignored.
If the user selects B, the filter [Some data item] = ?YourParm? will be applied.
Also, if you prefer not to have header text,
you can make the static values A and B and modify the optional filter to be like this:
(?YourParm? <> 'M') OR ([Some data item] = ?YourParm?)
A possible solution for this question is here:
https://stackoverflow.com/a/6223961/12343395
It would probably work, but with a lot of workarounds.
But I have stored my table names as strings and want to call them as needed.
I am using pandas read_sql_query, and in params I am passing the table name and a few parameters for the WHERE section.
The WHERE section is fine, since those parameters are strings anyway. But in the FROM section,
I really want the schema.table as a non-string.
Here is a snippet.
SELECT "rainfall(mm)","tmin(C)","tmax(C)","TimeStamp"
FROM crop_tables[choose_crop][0]
WHERE "District_Name" = %s AND "Season" = %s
ORDER BY "TimeStamp" ASC
where crop_tables[choose_crop][0] is 'sagita_historic.soyabean_daily_analyses' in this case.
But FROM will throw an error, since it doesn't accept a string there. So in essence, I wish to strip the quotes from 'sagita_historic.soyabean_daily_analyses' and use it as a non-string.
Is it possible to do so?
Thank you.
Not sure I fully understand, but maybe this will do? Build the query text with a Python f-string, so the table name is interpolated into the SQL before it is sent to the database:
query = f"""
    SELECT "rainfall(mm)", "tmin(C)", "tmax(C)", "TimeStamp"
    FROM {crop_tables[choose_crop][0]}
    WHERE "District_Name" = %s AND "Season" = %s
    ORDER BY "TimeStamp" ASC
"""
I have a field named field, and I would like to check whether it is null, but I get an error in the query. My code is this:
let
Condition= Excel.CurrentWorkbook(){[Name="test_table"]}[Content],
field= Condition{0}[fieldColumn],
query1="select * from students",
if field <> null then query1=query1 & " where id = '"& field &"',
exec= Oracle.Database("TESTING",[Query=query1])
in
exec
but I get an error in the condition. Can you identify the mistake?
I got: Expression.SyntaxError: Token Identifier expected.
You need to assign the if line to a variable. Each step in an M let expression needs to be an assignment:
let
Condition= Excel.CurrentWorkbook(){[Name="test_table"]}[Content],
field= Condition{0}[fieldColumn],
query1="select * from students",
query2 = if field <> null then query1 & " some stuff" else " some other stuff",
exec= Oracle.Database("TESTING",[Query=query2])
in
exec
In query2 you can build the select statement. I simplified it, because you also have conflicts with the double quotes.
I think you're looking for:
if Not IsNull(field) then ....
For some data types you may have to check using IsEmpty() or 'field Is Not Nothing' instead, depending on the data type and what you are using.
To troubleshoot, it's best to set a breakpoint, locate where the error happens, and watch the variable to guard against that specific value.
To meet this requirement, I would build a fresh Query using the PQ UI to select the students table/view from Oracle, and then use the UI to Filter the [id] column on any value.
Then in the advanced editor I would edit the generated FilteredRows line using code from your Condition + field steps, e.g.
FilteredRows = Table.SelectRows(TESTING_students, each [id] = Excel.CurrentWorkbook(){[Name="test_table"]}[Content]{0}[fieldColumn])
This is a minor change from a generated script, rather than trying to write the whole thing from scratch.
I need to import certain information from an Excel file into an Access DB and in order to do this, I am using DAO.
The user gets the Excel source file from a system; they do not need to interact with it directly. This source file has 10 columns, and I need to retrieve only certain records from it.
I am using this to retrieve all the records:
Set destinationFile = CurrentDb
Set dbtmp = OpenDatabase(sourceFile, False, True, "Excel 8.0;")
DoEvents
Set rs = dbtmp.OpenRecordset("SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536]")
My problem comes when I want to retrieve only certain records using a WHERE clause. The name of the field where I want to apply the clause is 'Date (UCT)' (remember that the user gets this source file from another system) and I cannot get the WHERE clause to work on it. If I apply the WHERE clause to another field, whose name does not have parentheses or spaces, then it works. Example:
Set rs = dbtmp.OpenRecordset("SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536] WHERE Other = 12925")
The previous instruction will retrieve only the number of records where the field Other has the value 12925.
Could anyone please tell me how I can achieve the same result with a field name that has spaces and parentheses, i.e. 'Date (UCT)'?
Thank you very much.
Octavio
Try enclosing the field name in square brackets:
SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536] WHERE [Date (UCT)] = 12925
or if it's a date we are looking for:
SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536] WHERE [Date (UCT)] = #02/14/13#;
To use a date literal you must enclose it in # characters and write the date in MM/DD/YY format, regardless of any regional settings on your machine.
I am rolling up a huge table by counts into a new table, where I want to change all the empty strings to NULL and typecast some columns as well. I read through some of the posts and could not find a query that would let me do it across all the columns in a single statement.
Let me know if it is possible to iterate across all columns and replace empty-string cells with NULL.
Ref: How to convert empty spaces into null values, using SQL Server?
To my knowledge there is no built-in function to replace empty strings across all columns of a table. You can write a plpgsql function to take care of that.
The following function replaces empty strings in all basic character-type columns of a given table with NULL. You can then cast to integer if the remaining strings are valid number literals.
CREATE OR REPLACE FUNCTION f_empty2null(_tbl regclass, OUT updated_rows int)
  LANGUAGE plpgsql AS
$func$
DECLARE
   _typ CONSTANT regtype[] := '{text, bpchar, varchar}';  -- ARRAY of all basic character types
   _sql text;
BEGIN
   SELECT INTO _sql                       -- build SQL command
          'UPDATE ' || _tbl
          || E'\nSET ' || string_agg(format('%1$s = NULLIF(%1$s, '''')', col), E'\n ,')
          || E'\nWHERE ' || string_agg(col || ' = ''''', ' OR ')
   FROM  (
      SELECT quote_ident(attname) AS col
      FROM   pg_attribute
      WHERE  attrelid = _tbl              -- valid, visible, legal table name
      AND    attnum >= 1                  -- exclude tableoid & friends
      AND    NOT attisdropped             -- exclude dropped columns
      AND    NOT attnotnull               -- exclude columns defined NOT NULL!
      AND    atttypid = ANY(_typ)         -- only character types
      ORDER  BY attnum
      ) sub;

   -- RAISE NOTICE '%', _sql;  -- test?

   -- Execute
   IF _sql IS NULL THEN
      updated_rows := 0;                             -- nothing to update
   ELSE
      EXECUTE _sql;
      GET DIAGNOSTICS updated_rows = ROW_COUNT;      -- report number of affected rows
   END IF;
END
$func$;
Call:
SELECT f_empty2null('mytable');
SELECT f_empty2null('myschema.mytable');
To also get the column name updated_rows in the result:
SELECT * FROM f_empty2null('mytable');
db<>fiddle here
Old sqlfiddle
Major points
Table name has to be valid and visible and the calling user must have all necessary privileges. If any of these conditions are not met, the function will do nothing - i.e. nothing can be destroyed, either. I cast to the object identifier type regclass to make sure of it.
The table name can be supplied as is ('mytable'), then the search_path decides. Or schema-qualified to pick a certain schema ('myschema.mytable').
Query the system catalog to get all (character-type) columns of the table. The provided function covers the basic character types listed in _typ: text, bpchar, and varchar; extend that array if you need more. Only relevant columns are processed.
Use quote_ident() or format() to sanitize column names and safeguard against SQLi.
The updated version uses the basic SQL aggregate function string_agg() to build the command string without looping, which is simpler and faster. And more elegant. :)
Has to use dynamic SQL with EXECUTE.
The updated version excludes columns defined NOT NULL and only updates each row once in a single statement, which is much faster for tables with multiple character-type columns.
Should work with any modern version of PostgreSQL. Tested with Postgres 9.1, 9.3, 9.5 and 13.