In my Informix database we are using a TEXT field, but we would like to convert it and start using BSON.
I know I can use an INSERT like this: INSERT INTO table (bson_field) VALUES ('{field:value}'::JSON), and this works well, but when I try to do:
UPDATE table SET bson_field = text_field::JSON
it doesn't work.
Error: No cast from text to json. SQLState: IX000 ErrorCode: -9634
Of course, the text in text_field is valid JSON.
Does someone know a way to convert it, with a stored procedure or function?
You can use the BSON functions instead.
genBSON is a good one.
But study the product manual, because there are several options.
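If you specifically need the TEXT-to-JSON conversion from the question, here is a minimal sketch of the stored-function route, assuming your Informix version accepts a TEXT column as an LVARCHAR argument and that each value fits within the LVARCHAR limit (32,739 bytes); text_to_json is a hypothetical name and the table/column names come from the question:

-- Hypothetical SPL helper: takes the value as LVARCHAR and casts it to JSON.
CREATE FUNCTION text_to_json(src LVARCHAR) RETURNING JSON;
    RETURN src::JSON;
END FUNCTION;

-- Assumes the TEXT column can be passed as an LVARCHAR argument; verify on your version.
UPDATE table SET bson_field = text_to_json(text_field);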
At the end of the pipeline I wanted to write the query below:
INSERT INTO [TestTable] (Job_name, status) VALUES (Job_name, current_timestamp())
Job_name will be passed as a parameter.
Can this be written in a Lookup activity? Please let me know.
Definitely possible. But should you write massive inserts/scripts within a Lookup? Probably not a great idea, but see below (a truncate example; it will work the same with an insert).
I use this method for small things like truncating a table, but never with big code that should be stored as source in the DB.
EDIT:
If you need to pass parameters or variables into the Lookup query, you should use string interpolation, like so:
INSERT INTO dbo.MyTable
SELECT '@{variables('YourVariable')}' AS Variable1,
       '@{pipeline().Pipeline}' AS PipelineName
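Applied to the question above, the Lookup query could look like the sketch below. A Lookup activity needs a result set to return, so a small trailing SELECT is a common workaround; pipeline().parameters.JobName is an assumed parameter name:

-- Sketch: the asker's insert as a Lookup query (parameter name assumed).
INSERT INTO [TestTable] (Job_name, status)
VALUES ('@{pipeline().parameters.JobName}', CURRENT_TIMESTAMP);
SELECT 1 AS done; -- gives the Lookup activity a row to return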
I have a Cassandra table which includes a user-defined type. Using CassandraTemplate from Spring Data Cassandra, I want to update a single field of that UDT. This doesn't seem possible.
I have tried this:
database.update(
    query(where("party_id").is(partyId))
        .and(where("relationship_id").is(relationshipId)),
    update("address.address_line_1", "this field was updated"),
    Address.class);
This throws:
Query error after 3 ms: UPDATE current_addresses_by_party SET "address.address_line_1"=? WHERE party_id=? AND relationship_id=?;com.datastax.driver.core.exceptions.InvalidQueryException: Undefined column name "address.address_line_1"
Running the CQL given in the error output without the quotes works. I don't know if there's a way to get Spring to execute this statement without putting the column name in quotes.
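For reference, the statement that does work when executed directly (the generated CQL minus the quotes around the column name, with the same three bind values) is:

UPDATE current_addresses_by_party
SET address.address_line_1 = ?
WHERE party_id = ? AND relationship_id = ?;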
In a fit of optimism I also tried using the syntax for map types:
database.update(
    query(where("party_id").is(partyId))
        .and(where("relationship_id").is(relationshipId)),
    Update.empty().set("address").atKey("address_line_1").to("this field was updated"),
    Address.class);
This resulted in the error you would expect: the field is not a map.
Is there a way to do what I want with CassandraTemplate without resorting to direct CQL? If CassandraTemplate lacks this feature, it would be great if the devs added it.
I was surprised that I couldn't find anyone else wanting to do this. Maybe I'm doing something completely wrong? I'm fairly new to Cassandra.
How to fetch compact JSONB from PostgreSQL?
All I get when fetching it is text with spaces:
SELECT data FROM a_table WHERE id = 1; -- data is JSONB column
{"unique": "bla bla", "foo": {"bar": {"in ...
^ ^ ^ ^ ^ --> spaces
What I want is:
{"unique":"bla bla","foo":{"bar":{"in ...
json_strip_nulls() does exactly what you're looking for:
SELECT json_build_object('a', 1);
returns
{"a" : 1}
But
SELECT json_strip_nulls(json_build_object('a', 1));
returns
{"a":1}
This function not only strips nulls, as indicated by its name and as documented, but incidentally also strips insignificant whitespace. The latter is not explicitly documented in the PostgreSQL manual.
Tested on PostgreSQL 11.3, but it probably works with earlier versions too.
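Applied to the query from the question, a sketch would be the following. Note the explicit cast to json: the jsonb variant, jsonb_strip_nulls(), returns jsonb, which would be re-rendered with the spaces again. Also keep in mind that any null-valued keys are removed along the way:

-- Compact text output for the original query; null-valued keys are dropped too.
SELECT json_strip_nulls(data::json) FROM a_table WHERE id = 1;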
jsonb is rendered in a standardized format on output. You would have to use json instead to preserve insignificant white space. Per documentation:
Because the json type stores an exact copy of the input text, it will
preserve semantically-insignificant white space between tokens, as
well as the order of keys within JSON objects. Also, if a JSON object
within the value contains the same key more than once, all the
key/value pairs are kept. (The processing functions consider the last
value as the operative one.) By contrast, jsonb does not preserve
white space, does not preserve the order of object keys, and does not
keep duplicate object keys.
The whitespace really shouldn't matter for JSON values.
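A minimal demonstration of the difference the documentation describes:

-- json keeps the input text verbatim; jsonb re-renders in its standard format.
SELECT '{"a":    1}'::json::text  AS json_keeps_spaces,   -- {"a":    1}
       '{"a":    1}'::jsonb::text AS jsonb_standardized;  -- {"a": 1}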
There is a discussion, started in 2016, about a jsonb_compact() function that would solve the problem... but it could take years (!).
Pretty solution
(a real solution for this question and this other one)
We have to accept PostgreSQL's convention for CAST(var_jsonb AS text). When you need another cast convention, for example for debugging or human-readable output, the built-in jsonb_pretty() function is a good choice.
Unfortunately, PostgreSQL offers no other choice, such as a compact one. So you can overload jsonb_pretty() with a compact option:
CREATE OR REPLACE FUNCTION jsonb_pretty(
    jsonb,            -- input
    compact boolean   -- true for compact format
) RETURNS text AS $$
    SELECT CASE
        WHEN $2 THEN json_strip_nulls($1::json)::text  -- compact (null keys dropped)
        ELSE jsonb_pretty($1)                          -- standard pretty-print
    END
$$ LANGUAGE SQL IMMUTABLE;
SELECT jsonb_pretty( jsonb_build_object('a',1, 'bla','bla bla'), true );
-- result: {"a":1,"bla":"bla bla"}
Remember that the compact mode, since it relies on json_strip_nulls(), also removes null-valued keys.
Rationale
The JSON standard, RFC 8259, says "... Insignificant whitespace is allowed before or after any of the six structural characters". In other words, the cast from the jsonb datatype to text has no canonical form. The PostgreSQL cast convention (using spaces) is arbitrary.
A lot of applications need to minimize big JSONB output. Two typical cases: minimizing the file size of a big JSONB document saved by pg_file_write(), and output online in a REST interface.
The PostgreSQL team should consider a real CAST procedure: not a parser, but direct text production from the internal JSONB representation.
The workaround of removing spaces from the JSON text is not a simple task; it needs a proper parser to avoid corrupting the content. The solution has to be a parser, not a regular-expression hack... And nowadays the built-in parser is json_strip_nulls(), even if only as incidental behavior.
I have a function module (FM) in SAP and I call it externally using startRFC. The only output of the FM is one internal table. This table has only one column of type char(100), and I need to get it into a text file. startRFC works well, but if there are diacritics (for example, Czech: ěščřžýáíé), only hashes (#) appear instead of those characters.
Has someone ever solved a similar issue?
If I call the same algorithm manually and write the strings to the screen in SAP, everything is OK. But startRFC somehow destroys them. The problem may be in the data transfer between SAP and startRFC, but I don't know how this transfer works.
I found a solution, but it is terribly slow. It converts the string to a hexadecimal string using gcl_conv_to_x->write and gcl_conv_to_x->get_buffer, then calls SCMS_XSTRING_TO_BINARY, and you need a binary table. But it takes 5 minutes to do all this; without the conversion my algorithm takes 15 seconds.
So finally a solution...
You need to create XSTRING variable and fill it with your text. To convert STRING to XSTRING use FM: SCMS_STRING_TO_XSTRING.
Then you will need an internal table with row type BAPICONTEN. It already contains a component (column) of type SDOK_SDATX (RAW 1022).
And you just append a new line to this table like this:
DATA: my_table_row LIKE LINE OF my_table.  " work area for one row
my_table_row-line = my_xstring.            " fill the RAW component
APPEND my_table_row TO my_table.
This table (my_table) can be returned via RFC and will contain Cyrillic, German characters, etc.
I am just a beginner, so do not ask me how to create the table, please :)
I need to get data from Cassandra without case sensitivity. Please help me.
There is no case-sensitivity concept in Cassandra. All the data is stored as byte[], so it's not even a String.
You can make a custom comparator (see the API) which transforms byte[] to String and disregards case.
The other option is to just get the data and transform it on the client side.
Actually, your question is quite unclear as to what your goal is, so I can't give more details.
Update: run a one-time job that fetches all records from the DB and updates them, converting the values to lowercase. Then keep inserting everything in lowercase.
This has since been resolved, if you have Solr enabled (DSE Search), using:
CREATE SEARCH INDEX ON tableName WITH COLUMNS *, camelCaseColumn { lowerCase : true };
An index is created that allows the SELECT statement to use lowercase in the WHERE clause. For more details, search for LowerCaseStrField.
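As a usage sketch (assuming DSE Search and the placeholder names from the statement above), the indexed column should then match regardless of case:

-- Should match 'John Smith', 'JOHN SMITH', etc., thanks to the lowerCase option.
SELECT * FROM tableName WHERE camelCaseColumn = 'john smith';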