How can I cast a value like 52:35, which is stored as a string in a PostgreSQL DB, so that a query returns only 52 as a bigint?
I tried the following query
select cast(substr(a,1,strpos(a,':')-1) AS bigint) as value from abc
which returned the error "negative substring length not allowed".
That query will fail when it encounters a value that does not contain a colon (:). Use a case...when...else...end construct to attempt the extraction only when the value contains a colon. Something like (untested):
CASE WHEN strpos(a,':') > 0
THEN cast(substr(a,1,strpos(a,':')-1) AS bigint)
ELSE null
END
For the else case, substitute whatever you need. There might also be a way to use split_part(...) instead of the above, but I had trouble finding documentation saying what happens if the delimiter is not present.
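Put together as a complete query (untested sketch, using the column and table names from your question):
select CASE WHEN strpos(a,':') > 0
            THEN cast(substr(a,1,strpos(a,':')-1) AS bigint)
            ELSE null
       END as value
from abc;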
You could use split_part(string text, delimiter text, field int)
postgres=# select split_part('52:35', ':', 1)::bigint;
split_part
------------
52
postgres=# select split_part('52', ':', 1)::bigint;
split_part
------------
52
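Applied to your table (a sketch, assuming the same column and table names as in your question): since split_part() simply returns the whole string when the delimiter is absent, as shown above, no CASE is needed.
select split_part(a, ':', 1)::bigint as value from abc;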
I am getting weird results when using Spark SQL statements like:
select * from mytab where somecol NOT IN ('ABC','DEF')
If I set somecol to ABC it returns nothing. If I set it to XXX it returns a row.
However, if I leave the column blank, like ,, in the CSV data (so the value is read as null), it still does not return anything, even though null is not in the list of values.
This remains the case even if re-written as NOT(somecol IN ('ABC','DEF')).
I feel like this is to do with comparisons between null and strings, but I am not sure what to do about null column values that end up in IN or NOT IN clauses.
Do I need to convert them to empty strings first?
You can put an explicit check for nulls in the query, since a comparison with null returns unknown in Spark (details here):
select * from mytab where somecol NOT IN ('ABC','DEF') or somecol is null
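An alternative sketch, assuming the empty string is not itself a meaningful value of somecol: map nulls to a placeholder with coalesce() so the comparison never involves null.
select * from mytab where coalesce(somecol, '') NOT IN ('ABC','DEF')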
I want to cast a string into an integer.
I have a table like this.
Have:
ID Salary
1 "$1,000"
2 "$2,000"
Want:
ID Salary
1 1000
2 2000
My query
Select Id, cast(substring(Salary,2, length(salary)) as int)
from have
I am getting an error.
ERROR: invalid input syntax for type integer: "1,000"
SQL state: 22P02
Can anyone please provide some guidance on this?
Remove all non-digit characters, then cast the result to an integer:
regexp_replace(salary, '[^0-9]+', '', 'g')::int
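For example, applied to the table from your question (an untested sketch):
select id,
       regexp_replace(salary, '[^0-9]+', '', 'g')::int as salary
from have;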
But instead of trying to convert the value every time you select it, fix your database design and convert the column to a proper integer. Never store numbers in text columns.
alter table bad_design
alter salary type int using regexp_replace(salary, '[^0-9]+', '', 'g')::int;
I have a table with a few key columns created as nvarchar(80), i.e. Unicode.
I can list the full dataset with a SELECT * statement (Table1) and can confirm the values I need to filter on are there.
However, I can't get any results from that table if I filter rows using alphabetic characters as input on any column.
The columns in Table1 store values in Cyrillic characters.
I know it must have to do with character encoding: what I see in the result list is not what I use as input characters.
The Unicode nvarchar type should resolve this character type mismatch automatically.
What do you suggest I do in order to get results?
Thank you very much.
Paulo
I am creating a sqlite3 table that accepts records from a server. There should be one date/text column that also has a datetime DEFAULT value, so I can sync a record whose time differs from the server's record.
I found a solution on this forum from here. The problem is that it gives me the following error when executing the table creation script: sqlite3.OperationalError: default value of column [updated_at] is not constant.
The table is created with:
cur.execute('CREATE TABLE IF NOT EXISTS emp_tb(\
emp_id INTEGER PRIMARY KEY NOT NULL,\
emp_names TEXT NOT NULL,\
emp_number TEXT NOT NULL UNIQUE,\
ent_id INTEGER NOT NULL,\
active INTEGER NOT NULL DEFAULT "0",\
updated_at TEXT NULL DEFAULT (datetime("now", "localtime")),\
syncstatus INTEGER NOT NULL DEFAULT "0")')
Should I create a trigger, or how can I get a default value in the format ("YYYY-MM-DD HH:MM:SS.SSS") in case the update misses a spot?
Use single quotes (') for the datetime options. As mentioned in the comments, they will have to be escaped (because the query is delimited with single quotes).
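A sketch of the corrected call (untested), with the inner single quotes escaped for the Python string literal and the numeric defaults left unquoted:
cur.execute('CREATE TABLE IF NOT EXISTS emp_tb(\
    emp_id INTEGER PRIMARY KEY NOT NULL,\
    emp_names TEXT NOT NULL,\
    emp_number TEXT NOT NULL UNIQUE,\
    ent_id INTEGER NOT NULL,\
    active INTEGER NOT NULL DEFAULT 0,\
    updated_at TEXT NULL DEFAULT (datetime(\'now\', \'localtime\')),\
    syncstatus INTEGER NOT NULL DEFAULT 0)')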
I have one column in my table in Postgres, let's say employeeId. We do some modification based on the employee type and store it in the DB. Basically, we append strings from these 4 strings ('ACR','AC','DCR','DC'). Now we can have any combination of these 4 strings appended after the employeeId. For example, EMPIDACRDC, EMPIDDCDCRAC, etc. These are valid combinations. I need to retrieve the EMPID from this. The EMPID length is not fixed, and the column is of varying length type. How can this be done in Postgres?
I am not entirely sure I understand the question, but regexp_replace() seems to do the trick:
with sample (employeeid) as (
  values
    ('1ACR'),
    ('2ACRDCR'),
    ('100DCRAC')
)
select employeeid,
       regexp_replace(employeeid, '(ACR|AC|DCR|DC).*$', '', 'gi') as clean_id
from sample
returns:
employeeid | clean_id
-----------+----------
1ACR       | 1
2ACRDCR    | 2
100DCRAC   | 100
The regular expression says "any of those strings, followed by any characters up to the end of the string", and that match is then replaced with nothing. This however won't work if the actual empid itself contains any of those codes.
It would be much cleaner to store this information in two columns: one for the empid and one for those "codes".
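If the codes are only ever appended at the end (and the empid itself is numeric, as in the sample), an anchored pattern is a safer sketch: it cannot touch codes that happen to occur earlier in the id, though it can still over-strip if the empid itself ends with one of the codes.
select employeeid,
       regexp_replace(employeeid, '(ACR|AC|DCR|DC)+$', '', 'i') as clean_id
from sample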