How can I write a query with an order by clause when the field I'm ordering by is based on the value of another column? - subquery

I'm trying to write a query where the order by clause depends on the value of a key named sortBy in a JSON column.
const userId = req.params.id
SELECT * FROM users
WHERE ...
ORDER BY (SELECT JSON_EXTRACT(preferences, '$.sortBy') FROM users WHERE id = ${userId})
I think the problem is that the subquery returns the value "last_active" (the name of one of the columns in the users table) as a text string enclosed in quotes, but the ORDER BY clause needs the bare column name without quotes?
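The quotes are one issue (JSON_EXTRACT returns a JSON-quoted string, which JSON_UNQUOTE strips), but even unquoted, ORDER BY on a string value sorts by that constant rather than by the column it names. Here is a minimal sketch of one common workaround, assuming MySQL and that sortBy can only name a known set of columns (created_at is a hypothetical second column):
SELECT *
FROM users
-- plus the original WHERE clause
ORDER BY CASE (SELECT JSON_UNQUOTE(JSON_EXTRACT(preferences, '$.sortBy'))
               FROM users WHERE id = ${userId})
    WHEN 'last_active' THEN last_active
    WHEN 'created_at'  THEN created_at
END;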

Related

PostgreSQL query where column is represented as a string

I'm using Retool, and trying to run a query where the column value comes from a drop-down list. The value output is a string, so my query looks like this:
select * from accounts where {{dropDownList.value}} ilike {{'%' + account_search_textInput.value + '%'}}
When the query runs, it is as follows:
select * from accounts where "first_name" ilike '%Adam%';
The double quotes around the column name first_name seem to be causing an issue, but I don't think I can remove them. Is there any other way to successfully run the query so that first_name can represent the column name rather than a string value?
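If the dropdown can only produce a known set of column names, one workaround (a sketch; last_name is a hypothetical second column) is to map the quoted string onto the real columns with a CASE expression, so the value is compared as data instead of being used as an identifier:
select *
from accounts
where (case {{dropDownList.value}}
         when 'first_name' then first_name
         when 'last_name'  then last_name
       end) ilike {{'%' + account_search_textInput.value + '%'}};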

SQLite returns int instead of real

I'm working on a website in Node.js and am using SQLite as a database for the first time.
I want to be able to use REAL for some form data, and I noticed that every REAL in my database is converted to an integer once the query is made.
To visualize the database I am using DB Browser, and I checked that the columns are defined as REAL, which they are.
If I try to query a value stored as 0.1 in my DB I get this:
sqlite> select step_variable
from variables
where id=38;
0.0
After trying the suggested TYPEOF(step_variable) command, it returned:
0.0|real
In the SQLite CREATE TABLE command, one defines a data type affinity, not a data type. SQLite supports the following five column affinities: TEXT, NUMERIC, INTEGER, REAL, and BLOB (formerly called NONE).
Thus the data type you specify when creating a table does not enforce a certain data type; you can supply any type name you want, or even omit it.
CREATE TABLE table1(
column1 ABC,
column2 Others,
column3 WHATEVER);
CREATE TABLE table2(column1, column2, column3);
Populate tables:
INSERT INTO table1 VALUES( 1, 'my text', 123.45);
INSERT INTO table2 VALUES( 1, 'my text', 123.45);
Now let us check what SQLite made out of it:
SELECT column1, TYPEOF(column1) FROM table1;
SELECT column2, TYPEOF(column2) FROM table1;
SELECT column3, TYPEOF(column3) FROM table1;
With:
column     TYPEOF(column)
------------------------
1          INTEGER
my text    TEXT
123.45     REAL
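Relating this back to the question: a column that really is declared REAL gets REAL affinity, so numeric text is coerced to a floating-point value on insert. A minimal sketch (table and values are hypothetical):
CREATE TABLE variables2(id INTEGER, step_variable REAL);
INSERT INTO variables2 VALUES (38, '0.1');
SELECT step_variable, TYPEOF(step_variable) FROM variables2;
-- 0.1|real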
When you step through a query result, e.g. by using sqlite3_step, you can use the sqlite3_column_type function to confirm the column type - unless you know the result anyway and simply cast the result to the expected data type.
Martin
I found the solution: it was simply that I didn't save my file after modifying it.

Hive ORC table empty string

I have a Hive table with data stored as ORC.
I write empty values (blank, '') into some fields, but sometimes when I run a select query on this table the empty-string columns are shown as NULL in the query result.
I would like to see the empty values I entered. How is this possible?
If you want to see empty values instead of NULL in a Hive table, you can use the NVL function, which produces a default value for NULL column values.
The syntax is:
NVL(arg1, arg2) - here arg1 is an expression or column and arg2 is the default value for NULL values.
e.g. SELECT NVL(blank, '') AS blank_1 FROM db.table;
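The standard COALESCE function, which Hive also supports, can be used the same way (column and table names follow the example above):
SELECT COALESCE(blank, '') AS blank_1 FROM db.table;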

Getting prompt values from another query through functions

I am a beginner in Cognos 10. I want to know how to get the value of a prompt from Query1 and use it in Query2. My requirement is that the prompt asks for the year for which I want the data in Query1, and in Query2 I want the data for the year before the one entered in the prompt. How do I do that?
You can use the same parameter (tied to the prompt) in filters in both queries. If your parameter is Parameter1 and contains a four-digit year, and your data item in your filter is [Year] then your Query1 filter might look like this:
[Year] = ?Parameter1?
Your Query2 filter would be:
[Year] = ?Parameter1? - 1
Depending on your data source, you may have to cast the string parameter to an integer before doing the subtraction, though most SQL implementations will implicitly convert it for you.
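If a cast is needed, the Query2 filter might look like the sketch below; the exact cast syntax is an assumption here and can vary by Cognos version and data source:
[Year] = cast(?Parameter1?, integer) - 1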

Cassandra-secondary index on part of the composite key?

I am using a composite primary key consisting of 2 strings Name1, Name2, and a timestamp (e.g. 'Joe:Smith:123456'). I want to query a range of timestamps given an equality condition for either Name1 or Name2.
For example, in SQL:
SELECT * FROM testcf WHERE (timestamp > 111111 AND timestamp < 222222 AND Name2 = 'Brown');
and
SELECT * FROM testcf WHERE (timestamp > 111111 AND timestamp < 222222 AND Name1 = 'Charlie');
From my understanding, the first part of the composite key is the partition key, so the second query is possible, but the first query would require some kind of index on Name2.
Is it possible to create a separate index on a component of the composite key? Or am I misunderstanding something here?
You will need to manually create and maintain an index of names if you want to use your schema and support the first query. Given this requirement, I question your choice of data model. Your model should be designed with your read pattern in mind. I presume you are also storing some column values that you want to query by timestamp. If so, perhaps the following model would serve you better:
"[current_day]:Joe:Smith" {
123456:Field1 : value
123456:Field2 : value
123450:Field1 : value
123450:Field2 : value
}
With this model you can use the current day (or some known day) as a sentinel value, then filter on first and last names. You can also get a range of columns by timestamp using the composite column names.
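For reference, a rough CQL 3 translation of that model might look like the sketch below; the table name, column names, and the text day bucket are assumptions, not part of the original schema:
CREATE TABLE activity_by_name (
    day    text,     -- sentinel bucket, e.g. the current day
    name1  text,
    name2  text,
    ts     bigint,
    field1 text,
    field2 text,
    PRIMARY KEY ((day, name1, name2), ts)
);
-- Range of timestamps for a known pair of names within a day bucket:
SELECT * FROM activity_by_name
WHERE day = '2013-01-15' AND name1 = 'Joe' AND name2 = 'Smith'
  AND ts > 111111 AND ts < 222222;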
