I'm working on a website in Node.js and am using SQLite as a database for the first time.
I want to be able to use REAL for some form data, but I noticed that every REAL in my database is converted to an integer once the query is made.
To visualize the database I am using DB Browser, and I checked whether the columns are defined as REAL, which they are.
If I query a value stored as 0.1 in my DB, I get this:
sqlite> select step_variable
from variables
where id=38;
0.0
After trying TYPEOF(step_variable) as suggested, it returned:
0.0|real
In the SQLite CREATE TABLE command, one defines a data type affinity, not a data type. SQLite supports the following five column affinities: TEXT, NUMERIC, INTEGER, REAL, and BLOB (historically called NONE).
Thus the data type you specify when creating a table does not enforce a certain data type. You can supply any data type name you want, or even omit it entirely.
CREATE TABLE table1(
column1 ABC,
column2 Others,
column3 WHATEVER);
CREATE TABLE table2(column1, column2, column3);
Populate tables:
INSERT INTO table1 VALUES( 1, 'my text', 123.45);
INSERT INTO table2 VALUES( 1, 'my text', 123.45);
Now let us check what SQLite made out of it:
SELECT column1, TYPEOF(column1) FROM table1;
SELECT column2, TYPEOF(column2) FROM table1;
SELECT column3, TYPEOF(column3) FROM table1;
The results:

column      TYPEOF(column)
----------  --------------
1           integer
my text     text
123.45      real
When you step through a query result, e.g. by using sqlite3_step, you can use the sqlite3_column_type function to confirm the column type - unless you know the result anyway and can simply cast the result to the data type expected.
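If a value has come back with the wrong type, you can also force the conversion in the query itself; a minimal sketch using the table and column names from the question:

SELECT step_variable, CAST(step_variable AS REAL) FROM variables WHERE id = 38;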
Martin
I found the solution: it was simply that I didn't save my file after modifying it.
Context: I have a data flow that extracts data from a SQL DB. The data comes in as just one column with a string separated by tabs, so in order to manipulate it properly I've tried to separate every single column with its corresponding data.
First, to 'rebuild' the table properly, I used a 'Derived Column' activity, replacing tabs with semicolons (1):
dropLeft(regexReplace(regexReplace(regexReplace(descripcion, '[\t]', ';'), '[\n]', ';'), '[\r]', ';'), 1)
After that, I used the split() function to get an array and build the columns (2):
split(descripcion, ';')
Problem: When I try to use the 'Flatten' activity (as here: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-flatten), it is just not working; the data flow gives me just one column, or, if I add an additional column in the 'Flatten' activity, I just get another column with the same data as the first one.
Expected output:

column1  column2                             column3
-------  ----------------------------------  -------
2000017  ENVASE CORONA CLARA 24/355 ML GRAB  PC13
2004297  ENVASE V FAM GRAB 12/940 ML USADO   PC15
Could you tell me what I'm doing wrong? Thanks in advance.
You can use the Derived Column activity itself; try as below.
After the first derived column, what you have is a string array, which can just be split again using a Derived Column schema modifier.
Here firstc represents the source column equivalent to your column descripcion:
Column1: split(firstc, ';')[1]
Column2: split(firstc, ';')[2]
Column3: split(firstc, ';')[3]
Optionally, you can then select the columns you need to write to the SQL sink.
There are a number of Cassandra built-in functions, for example now() or uuid(). Is it possible to call those functions without a SELECT operation, using CQL? So far I have to do
SELECT count(*), uuid() from table;
where table is a table that's always empty.
Is there a better way?
Unfortunately no, you cannot call functions like uuid() and now() without executing a query/upsert. But I do have a way to keep you from having to maintain an empty table.
SELECT uuid() FROM system.local;
system.local will:
A) always be there, and
B) only ever contain a single row.
It's similar to what you're doing now, but again, it prevents you from having to maintain an empty table just to gen-up a UUID.
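The same trick works for the other generator functions; for instance, now() returns a fresh timeuuid, again querying system.local so no table of your own is needed:

SELECT now() FROM system.local;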
Upsert example
If I have a table like this:
CREATE TABLE timetest (
yearmonth TEXT,
id UUID,
value TEXT,
PRIMARY KEY (yearmonth, id));
I can INSERT to it and gen a new UUID like this:
INSERT INTO timetest (yearmonth,id,value)
VALUES ('201601',uuid(),'v1');
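A quick read-back of that partition confirms the generated id; note that each run of the INSERT above calls uuid() again, so it adds a new row rather than overwriting:

SELECT yearmonth, id, value FROM timetest WHERE yearmonth = '201601';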
If the user enters 678.98, then it should display only 678 in one of the fields of a window.
I have declared that field as numeric in the database, and set a Number type reference in Tables and Columns in Openbravo.
Use Math.floor():

// java.lang.Math is imported automatically, so no explicit import is needed
double value = 678.98;
double roundedDown = Math.floor(value); // roundedDown == 678.0
int display = (int) roundedDown;        // 678, if an integer is needed for display
See also this page about another rounding example.
Working directly with PostgreSQL, use the floor() function.
To insert data
INSERT INTO table ... VALUES( floor(678.98) ) ...
To select data
SELECT floor( field ) FROM table WHERE ....
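Note that floor() rounds toward negative infinity, so for negative values it does not simply drop the fractional part; trunc() does. A quick illustration of the difference in PostgreSQL:

SELECT floor(678.98);   -- 678
SELECT floor(-678.98);  -- -679
SELECT trunc(-678.98);  -- -678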
Set an Integer type reference in Tables and Columns in Openbravo.
Short version: Is it possible to query for all timeuuid columns corresponding to a particular date?
More details:
I have a table defined as follows:
CREATE TABLE timetest(
key uuid,
activation_time timeuuid,
value text,
PRIMARY KEY(key,activation_time)
);
I have populated this with a single row, as follows (f0532ef0-2a15-11e3-b292-51843b245f21 is a timeuuid corresponding to the date 2013-09-30 22:19:06+0100):
insert into timetest (key, activation_time, value) VALUES (7daecb80-29b0-11e3-92ec-e291eb9d325e, f0532ef0-2a15-11e3-b292-51843b245f21, 'some value');
And I can query for that row as follows:
select activation_time,dateof(activation_time) from timetest where key=7daecb80-29b0-11e3-92ec-e291eb9d325e
which results in the following (using cqlsh)
activation_time | dateof(activation_time)
--------------------------------------+--------------------------
f0532ef0-2a15-11e3-b292-51843b245f21 | 2013-09-30 22:19:06+0100
Now let's assume there's a lot of data in my table and I want to retrieve all rows where activation_time corresponds to a particular date, say 2013-09-30 22:19:06+0100.
I would have expected to be able to query for the range of all timeuuids between minTimeuuid('2013-09-30 22:19:06+0100') and maxTimeuuid('2013-09-30 22:19:06+0100') but this doesn't seem possible (the following query returns zero rows):
select * from timetest where key=7daecb80-29b0-11e3-92ec-e291eb9d325e and activation_time>minTimeuuid('2013-09-30 22:19:06+0100') and activation_time<=maxTimeuuid('2013-09-30 22:19:06+0100');
It seems I need to use a hack whereby I increment the second date in my query (by a second) to catch the row(s), i.e.,
select * from timetest where key=7daecb80-29b0-11e3-92ec-e291eb9d325e and activation_time>minTimeuuid('2013-09-30 22:19:06+0100') and activation_time<=maxTimeuuid('2013-09-30 22:19:07+0100');
This feels wrong. Am I missing something? Is there a cleaner way to do this?
The CQL documentation discusses timeuuid functions but it's pretty short on gte/lte expressions with timeuuids, beyond:
The min/maxTimeuuid example selects all rows where the timeuuid column, t, is strictly later than 2013-01-01 00:05+0000 but strictly earlier than 2013-02-02 10:00+0000. The t >= maxTimeuuid('2013-01-01 00:05+0000') does not select a timeuuid generated exactly at 2013-01-01 00:05+0000 and is essentially equivalent to t > maxTimeuuid('2013-01-01 00:05+0000').
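For reference, the query that passage describes would look something like this (my reconstruction from the quoted text; myTable is a placeholder name):

SELECT * FROM myTable
WHERE t > maxTimeuuid('2013-01-01 00:05+0000')
  AND t < minTimeuuid('2013-02-02 10:00+0000');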
p.s. the following query also returns zero rows:
select * from timetest where key=7daecb80-29b0-11e3-92ec-e291eb9d325e and activation_time<=maxTimeuuid('2013-09-30 22:19:06+0100');
and the following query returns the row(s):
select * from timetest where key=7daecb80-29b0-11e3-92ec-e291eb9d325e and activation_time>minTimeuuid('2013-09-30 22:19:06+0100');
I'm sure the problem is that cqlsh does not display milliseconds for your timestamps.
So the real timestamp is something like '2013-09-30 22:19:06.123+0100'.
When you call maxTimeuuid('2013-09-30 22:19:06+0100'), the milliseconds are missing, so zero is assumed; it is the same as calling maxTimeuuid('2013-09-30 22:19:06.000+0100').
And as 22:19:06.123 > 22:19:06.000, that causes the record to be filtered out.
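So instead of widening the range by a whole second, you can spell out the milliseconds in the upper bound; a sketch, assuming the stored timestamp falls somewhere within that second:

select * from timetest where key=7daecb80-29b0-11e3-92ec-e291eb9d325e and activation_time >= minTimeuuid('2013-09-30 22:19:06+0100') and activation_time <= maxTimeuuid('2013-09-30 22:19:06.999+0100');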
Not directly related to the answer, but as an additional add-on to @dimas's answer:
cqlsh (version 5.0.1) seems to show the milliseconds now:
system.dateof(id)
---------------------------------
2016-06-03 02:42:09.990000+0000
2016-05-28 17:07:30.244000+0000
I am trying to read in a table called operations that looks like this:
"id";"name";
"1";"LASER CUTTING";
"2";"DEBURR";
"3";"MACHINING";
"4";"BENDING";
"5";"PEM";
"6";"WELDING";
"7";"PAINT PREPARATION";
"8";"PAINTING";
"9";"SILKSCREEN PREPARATION";
"10";"SILKSCREEN";
"11";"ASSEMBLY - PACKAGING";
"12";"LASER PREP";
I want to have a column in a worksheet that gets the appropriate name based on the value of an operation_id column in another worksheet.
How do I look up a particular cell in another worksheet depending on the value of a cell?
Example
userid, operation_id, operation_name
bob, 3, MACHINING
You should look at the DGET(database,field,criteria) function, reference here.
Or you can use this worksheet function:
=VLOOKUP(cellWithID, Sheet2!A1:B13, 2, FALSE)
where cellWithID is the cell with the ID value you want to use.
Maybe the Lookup() function would work better for you.
http://www.techonthenet.com/excel/formulas/lookup.php
Based on this: "What I really want to do is just lookup the name for an operation without having to run a sql query every time or have an ugly huge if statement in every cell."
I guess what confuses me here is why you don't just use a join. You can always join that table as a lookup to whatever your sql statement is;
select operations.name, tableA.* from tableA
left outer join operations on operations.id = tableA.operationid
If you wanted to, you could functionize this; it's not recommended, since subqueries are, generally speaking, bad news. However,
create function dbo.LookupOperationName
(
    @id int
)
returns varchar(100)
as
begin
    declare @returnvalue varchar(100);
    select @returnvalue = name from Operations where id = @id;
    return @returnvalue;
end
would do the trick. Then you could (scalar functions must be called with their schema prefix):
select tablea.*, dbo.LookupOperationName(operationid) from tablea
Again, remember that the join example is much more performant. You could also create a view that has the join, and use the view in place of the table... all kinds of things.
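For instance, a view along these lines (a sketch using the hypothetical tableA and operations names from above; it assumes tableA has no column already named operation_name):

create view dbo.OperationsLookupView as
select tableA.*, operations.name as operation_name
from tableA
left outer join operations on operations.id = tableA.operationid;

Queries then read against dbo.OperationsLookupView exactly as they would against tableA, with operation_name available as an extra column.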