Fetching a compact version of JSONB in PostgreSQL

How to fetch compact JSONB from PostgreSQL?
All I got when fetching is with spaces:
SELECT data FROM a_table WHERE id = 1; -- data is JSONB column
{"unique": "bla bla", "foo": {"bar": {"in ...
^ ^ ^ ^ ^ --> spaces
What I want is:
{"unique":"bla bla","foo":{"bar":{"in ...

json_strip_nulls() does exactly what you're looking for:
SELECT json_build_object('a', 1);
returns
{"a" : 1}
But
SELECT json_strip_nulls(json_build_object('a', 1));
returns
{"a":1}
This function not only strips nulls, as its name indicates and as documented, but incidentally also strips insignificant whitespace. The latter is not explicitly documented in the PostgreSQL manual.
Tested in PostgreSQL 11.3, but probably works with earlier versions too.

jsonb is rendered in a standardized format on output. You would have to use json instead to preserve insignificant white space. Per documentation:
Because the json type stores an exact copy of the input text, it will
preserve semantically-insignificant white space between tokens, as
well as the order of keys within JSON objects. Also, if a JSON object
within the value contains the same key more than once, all the
key/value pairs are kept. (The processing functions consider the last
value as the operative one.) By contrast, jsonb does not preserve
white space, does not preserve the order of object keys, and does not
keep duplicate object keys.
The whitespace really shouldn't matter for JSON values.
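A quick demonstration of the difference (the input literal is arbitrary):
SELECT '{"a" :1}'::json::text  AS json_text,
       '{"a" :1}'::jsonb::text AS jsonb_text;
-- json_text  → {"a" :1}    (input preserved verbatim)
-- jsonb_text → {"a": 1}    (normalized: one space after the colon, none before)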

There is a discussion, started in 2016, about a jsonb_compact() function that would solve the problem... but it could take years.
Pretty solution
  (a real solution for this question and this other one)
We have to accept PostgreSQL's convention for CAST(var_jsonb AS text). When you need a different convention, for example for debugging or human-readable output, the built-in jsonb_pretty() function is a good choice.
Unfortunately, PostgreSQL offers no other choices, such as a compact one. So you can overload jsonb_pretty() with a compact option:
CREATE OR REPLACE FUNCTION jsonb_pretty(
  jsonb,           -- input
  compact boolean  -- true for compact format
) RETURNS text AS $$
  SELECT CASE
    WHEN $2 THEN json_strip_nulls($1::json)::text  -- compact: reserialize without whitespace
    ELSE jsonb_pretty($1)                          -- default: the built-in pretty printer
  END
$$ LANGUAGE SQL IMMUTABLE;
SELECT jsonb_pretty( jsonb_build_object('a',1, 'bla','bla bla'), true );
-- results {"a":1,"bla":"bla bla"}
Rationale
The JSON standard, RFC 8259, says "... Insignificant whitespace is allowed before or after any of the six structural characters". In other words, the cast from the jsonb datatype to text has no canonical form; the PostgreSQL cast convention (using spaces) is arbitrary.
Many applications need to minimize a big JSONB output. Two typical cases: reducing the file size of a big JSONB document saved with pg_file_write(), and producing compact output for a REST interface.
Ideally, the PostgreSQL team would offer a real CAST procedure: not a parser, but direct text production from the internal JSONB representation.
The workaround of removing spaces from the JSON text is not a simple task; it needs a proper parser to avoid tampering with the content. A regular expression is not enough. Nowadays the built-in parser that does this is json_strip_nulls(), even if the compaction is only incidental behavior.
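For example, a naive regex that deletes all spaces would also mangle spaces inside string values; the parser-based json_strip_nulls() only removes whitespace between tokens:
SELECT json_strip_nulls('{"msg": "a  b",  "n": 1}'::json)::text;
-- → {"msg":"a  b","n":1}   (content spaces untouched)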

Acceptable encoding for Cosmos DB IDs to replace illegal characters?

I'm trying to store data in Cosmos DB where the IDs use a slash (/). However, slash is an illegal character in Cosmos IDs. I initially tried to resolve this by URL-encoding slashes (%2F), as that's the form I'd generally receive them in through API requests. However, though percent (%) is not an illegal character for IDs, Cosmos still chokes on them and is unable to retrieve many documents with a percent in the ID (it works for some, but it appears to fail if the % is followed by certain characters).
Is there an encoding that is suitable for Cosmos DB IDs that will replace illegal characters in the original ID text without introducing illegal or unhandled characters (like %) into the encoded ID text? I'd prefer to stay away from things like Base64, which makes the IDs hard to decipher for people. And I'd also like to avoid simple character replacement (/ becomes -) in case an ID uses the replacement character.
I ended up doing simple character replacement, swapping out slashes (/) with pipes (|).
The key thing to make this livable is adding a value converter with EntityFramework.
// Store '/' as '|' on the way into Cosmos, and restore it on the way out.
Expression<Func<string?, string>> toDB = v => v!.Replace("/", "|");
Expression<Func<string, string?>> fromDB = v => v.Replace("|", "/");
builder.Property(p => p.Id).HasConversion(toDB, fromDB);
This allows the character replacement to happen automatically when reading & writing to the database. The only time you need to worry about the difference is if you're accessing the database directly or from other code without the converter. Or possibly doing custom searches. I manually do the translation for a filtering framework we use, and I suspect that other id search solutions would need the same manual translation.
Ultimately I decided this was acceptable as we are unlikely to have other characters that need translation for our case, the translation is easy to do visually, and it's transparent in most cases with ValueConverters. But it isn't a general solution that would work for any possible string id.
Edit:
On second thought, this solution is deficient. Cosmos does actually allow creating documents with illegal characters in the ID, it just doesn't allow accessing or deleting them easily. An ideal solution would prevent all illegal characters across the board, whether expected or not.

Insert values with single quotes in a Postgres table column [duplicate]

I have a table test(id,name).
I need to insert values like: user's log, 'my user', customer's.
insert into test values (1,'user's log');
insert into test values (2,''my users'');
insert into test values (3,'customer's');
I am getting an error if I run any of the above statements.
If there is any method to do this correctly, please share. I don't want any prepared statements.
Is it possible using an SQL escaping mechanism?
String literals
Escaping single quotes ' by doubling them up → '' is the standard way and works of course:
'user's log' -- incorrect syntax (unbalanced quote)
'user''s log'
Plain single quotes (ASCII / UTF-8 code 39), mind you, not backticks `, which have no special purpose in Postgres (unlike certain other RDBMS) and not double-quotes ", used for identifiers.
In old versions or if you still run with standard_conforming_strings = off or, generally, if you prepend your string with E to declare Posix escape string syntax, you can also escape with the backslash \:
E'user\'s log'
Backslash itself is escaped with another backslash. But that's generally not preferable.
If you have to deal with many single quotes or multiple layers of escaping, you can avoid quoting hell in PostgreSQL with dollar-quoted strings:
'escape '' with '''''
$$escape ' with ''$$
To further avoid confusion among dollar-quotes, add a unique token to each pair:
$token$escape ' with ''$token$
Which can be nested any number of levels:
$token2$Inner string: $token1$escape ' with ''$token1$ is nested$token2$
Pay attention if the $ character should have special meaning in your client software. You may have to escape it in addition. This is not the case with standard PostgreSQL clients like psql or pgAdmin.
That is all very useful for writing PL/pgSQL functions or ad-hoc SQL commands. It cannot alleviate the need to use prepared statements or some other method to safeguard against SQL injection in your application when user input is possible, though. Craig's answer below has more on that. More details:
SQL injection in Postgres functions vs prepared queries
Values inside Postgres
When dealing with values inside the database, there are a couple of useful functions to quote strings properly:
quote_literal() or quote_nullable() - the latter outputs the unquoted string NULL for null input.
There is also quote_ident() to double-quote strings where needed to get valid SQL identifiers.
format() with the format specifier %L is equivalent to quote_nullable().
Like: format('%L', string_var)
concat() or concat_ws() are typically no good for this purpose as those do not escape nested single quotes and backslashes.
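A few examples of what these return (the input values are arbitrary; dollar-quoting is used so the embedded quote needs no escaping):
SELECT quote_literal($$user's log$$);   -- → 'user''s log'
SELECT format('%L', $$user's log$$);    -- → 'user''s log'
SELECT quote_nullable(NULL::text);      -- → NULL   (unquoted)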
According to PostgreSQL documentation (4.1.2.1. String Constants):
To include a single-quote character within a string constant, write
two adjacent single quotes, e.g. 'Dianne''s horse'.
See also the standard_conforming_strings parameter, which controls whether escaping with backslashes works.
This is so many worlds of bad, because your question implies that you probably have gaping SQL injection holes in your application.
You should be using parameterized statements. For Java, use PreparedStatement with placeholders. You say you don't want to use parameterised statements, but you don't explain why, and frankly it has to be a very good reason not to use them because they're the simplest, safest way to fix the problem you are trying to solve.
See Preventing SQL Injection in Java. Don't be Bobby's next victim.
There is no public function in PgJDBC for string quoting and escaping. That's partly because it might make it seem like a good idea.
There are built-in quoting functions quote_literal and quote_ident in PostgreSQL, but they are for PL/PgSQL functions that use EXECUTE. These days quote_literal is mostly obsoleted by EXECUTE ... USING, which is the parameterised version, because it's safer and easier. You cannot use them for the purpose you explain here, because they're server-side functions.
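A minimal PL/pgSQL sketch of that server-side pattern, assuming the question's test table (the function name is made up):
CREATE OR REPLACE FUNCTION insert_test(p_id int, p_name text) RETURNS void AS $func$
BEGIN
    -- the value travels as a bound parameter; no quoting or escaping needed
    EXECUTE 'INSERT INTO test (id, name) VALUES ($1, $2)' USING p_id, p_name;
END;
$func$ LANGUAGE plpgsql;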
Imagine what happens if you get the value ');DROP SCHEMA public;-- from a malicious user. You'd produce:
insert into test values (1,'');DROP SCHEMA public;--');
which breaks down to two statements and a comment that gets ignored:
insert into test values (1,'');
DROP SCHEMA public;
--');
Whoops, there goes your database.
In PostgreSQL, if you want to insert a value containing a ', you have to escape it with an extra ':
insert into test values (1,'user''s log');
insert into test values (2,'''my users''');
insert into test values (3,'customer''s');
you can use the PostgreSQL chr(int) function, where chr(39) is the single quote:
insert into test values (2, chr(39) || 'my users' || chr(39));
When I used Python to insert values into PostgreSQL, I also ran into this problem: column "xxx" does not exist.
Then I found the reason in the PostgreSQL wiki:
PostgreSQL uses only single quotes for this (i.e. WHERE name = 'John'). Double quotes are used to quote system identifiers; field names, table names, etc. (i.e. WHERE "last name" = 'Smith').
MySQL uses ` (accent mark or backtick) to quote system identifiers, which is decidedly non-standard.
It means PostgreSQL uses double quotes only for identifiers such as field and table names, and single quotes for values. So you cannot use an unescaped single quote inside a value.
My situation is: I want to insert the value "the difference of it’s adj for sb and it's adj of sb" into PostgreSQL.
How I worked around the problem:
I replaced ' with the typographic quote ’, and " with ', so that no unescaped single quotes remained in the values.
So I think you can use following codes to insert values:
insert into test values (1,'user’s log');
insert into test values (2,'my users');
insert into test values (3,'customer’s');
If you need to get the work done inside Pg:
to_json(value)
https://www.postgresql.org/docs/9.3/static/functions-json.html#FUNCTIONS-JSON-TABLE
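For illustration, to_json() produces a valid JSON string literal with embedded quotes escaped (the input value is arbitrary):
SELECT to_json('user''s "log"'::text);
-- → "user's \"log\""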
You have to add an extra single quote and double them up, as in the examples below. Doubling the quote ('') is the standard way and works, of course:
Wrong way: 'user's log'
Right way: 'user''s log'
Problem:
insert into test values (1,'user's log');
insert into test values (2,''my users'');
insert into test values (3,'customer's');
Solutions:
insert into test values (1,'user''s log');
insert into test values (2,'''my users''');
insert into test values (3,'customer''s');

How to represent a missing xsd:dateTime in RDF?

I have a dataset with data collected from a form that contains various date and value fields. Not all fields are mandatory so blanks are possible and
in many cases expected, like a DeathDate field for a patient who is still alive.
How do I best represent these blanks in the data?
I represent DeathDate using xsd:dateTime. Blanks or empty spaces are not allowed. All of these are flagged as invalid when validating using Jena RIOT:
foo:DeathDate_1
    a foo:Deathdate ;
    time:inXSDDatetime " "^^xsd:dateTime .
foo:DeathDate_2
    a foo:Deathdate ;
    time:inXSDDatetime ""^^xsd:dateTime .
foo:DeathDate_3
    a foo:Deathdate ;
    time:inXSDDatetime "--"^^xsd:dateTime .
I prefer to not omit the triple because I need to know if it was blank on the source versus a conversion error during construction of my RDF.
What is the best way to code these missing values?
You should represent this by just omitting the triple. That's the meaning of a triple that's "not present": it's information that is (currently) unknown.
Alternatively, you can choose to give it the value "unknown"^^xsd:string when there's no death date. The solution in this case is to not datatype it as an xsd:dateTime, but just as a simple string. It doesn't have to be a string of course, you could use any kind of "special" value for this, e.g. a boolean false - just as long as it's a valid literal value that you can distinguish from actual death dates. This will solve the parsing problem, but IMHO if you do this, you are setting yourself up for headaches in processing the data further down the line (because you will need to ask queries over this data, and they will have to take two different types of values into account, plus the possibility that the field is missing).
I prefer to not omit the triple because I need to know if it was blank
on the source versus a conversion error during construction of my RDF.
This sounds like an XY problem. If there are conversion errors, your application should signal that in another way, e.g. by logging an error. You shouldn't try to solve this by "corrupting" your data.

Separating fields out of a string in Hive

I have the following problem...
I work with Hive and want to add a file with several (different) rows of Strings. Those contain fields with a fixed size, like this:
A20130420bcd 34 fgh
where the fields have the length 1,8,6,4,3.
Separated it would look like this:
"A,20130420,bcd,fgh"
Is there any way to read the string and split it into fields other than taking a substring for every field, like
substring(col_value,1,1) Field1
etc.?
I would imagine that cutting off the already-read part of the string would increase performance, but I couldn't think of any way to do this with the given functions.
Secondly, as stated before, there are different types of strings, ordered and identified by the first character. Right now I just filter them with a WHERE clause, but it's horrible, as it scans the whole file just to find the first kind of string. Is there any way to read specific lines by their number? If I know that the first string will be of a certain kind, can I read it directly?
Right now it looks like this:
insert overwrite table TEST
SELECT
  substring(col_value,1,1) field1,
  ...
  substring(col_value,10,3) field5
from temp_data WHERE substring(col_value,1,1) = 'A';
any ideas on this?
I would love to hear some ideas =)
You need to write your own generic UDF parser that outputs a struct or map or whatever is appropriate. You can refer to existing UDFs that output multiple values.
then you can write
insert overwrite table output
select p.parsed.first, p.parsed.second
from (
  select parse(target) as parsed   -- parse() is the custom UDF returning a struct
  from input
) p
where p.parsed.first = 'X';
About the second question: you may need to check Hive's EXPLAIN command to see whether Hive does the filter push-down for you (just see how many MapReduce stages it takes; theoretically it should be a single map stage, depending on 1. the Hive version and 2. the output table format).
In a general sense, this is why databases are popular: they take optimization into consideration for you.
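If you'd rather avoid a custom UDF altogether, here is a sketch using Hive's built-in regexp_extract(), assuming the fixed widths 1,8,6,4,3 from the question:
SELECT regexp_extract(col_value, '^(.{1})(.{8})(.{6})(.{4})(.{3})$', 1) AS field1,
       regexp_extract(col_value, '^(.{1})(.{8})(.{6})(.{4})(.{3})$', 2) AS field2,
       regexp_extract(col_value, '^(.{1})(.{8})(.{6})(.{4})(.{3})$', 5) AS field5
FROM temp_data
WHERE substring(col_value,1,1) = 'A';   -- still a full scan unless the filter is pushed down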

replaceAll quotes with backslashed quotes -- Is that enough?

I'm using replaceAll to replace single quotes with "\\\\'" per a colleague's suggestion, but I'm pretty sure that's not enough to prevent all SQL injections.
I did some googling and found this: http://wiki.postgresql.org/wiki/8.1.4_et._al._Security_Release_Technical_Info
This explains it for PostgreSQL, but does the replacing not work for all SQL managers? (Like, MySQL, for example?)
Also, I think I understand how the explanation I linked works for single backslash, but does it extend to my situation where I'm using four backslashes?
Please note that I'm not very familiar with databases and how they parse input, but this is my chance to learn more! Any insight would be appreciated.
Edit: I've gotten some really helpful, useful answers. My next question is, what kind of input would break my implementation? That is, if you give me input and I prepend all single quotes with four backslashes, what kind of input would you give me to inject SQL code? While I am convinced that my approach is naive and wrong, maybe some examples would better teach me how easy it is to inject SQL against my "prevention".
No, because what about backslashes? For instance, if you turn ' into \', then the input \' will become \\', which is an unescaped single quote preceded by a "character literal" backslash. For MySQL there is mysql_real_escape_string(), which should exist for every platform because it's in the MySQL library bindings.
But there is another problem. And that is if you have no quote marks around the data segment. In php this looks like:
$query="select * from user where id=".$_GET[id];
The PoC exploit for this is very simple: http://localhost/vuln.php?id=sleep(10)
Even if you do a mysql_real_escape_string($_GET[id]), it's still vulnerable to SQL injection, because the attacker doesn't have to break out of quote marks in order to execute SQL. The best solution is parameterized queries.
No.
This is not enough, and this is not the way to go. And I can say it without even knowing anything about your data, your SQL or even anything about your application. You should never, ever include any user data directly into your SQL. You should use parameterized statements instead.
Besides if you are asking this question you shouldn't write your own SQL by hand in the first place. Use a good ORM instead. Asking if your home-grown regular expression would make your application safe from SQL injection is like asking if your home-grown memory allocation routine that you have written in Assembly language is safe from buffer overruns - to which I would say: if you are asking this question then you should use a memory-safe language in the first place.
A simple case of SQL injection works like this (in pseudocode):
name = form_params["name"]
year = 2011
sql = "INSERT INTO Students (name, year) " +
"VALUES ('" + name + "', " + year + ");"
database_handle.query(sql)
year is supplied by you, the programmer, so it's not tainted, and can be embedded in the query in any way you find suitable; in this case — as an unquoted number.
But name is supplied by the user and so can be anything. Along comes Bobby Tables and inputs this value:
name = "Robert'); DROP TABLE Students; -- "
And the query becomes
INSERT INTO Students (name, year) VALUES ('Robert');
DROP TABLE Students; -- ', 2011);
That substitution turned your one query into two.
The first one gives an error because of the mismatched column count, but that doesn't matter, because the database is able to unambiguously find and run the second query. The attacker can work around the error by fiddling with the input anyway. The -- is a comment, so the rest of the input is ignored.
Note how data suddenly became code — a typical sign of a security problem.
What the suggested replacement does is this:
name = form_params["name"].regex_replace("'", "\\\\'")
How this works is confusing, hence my earlier comment. The string literal "\\\\'" represents the string \\'. The regex_replace function interprets that as the string \'. The database then sees
... VALUES ('Robert\'); DROP TABLE Students; -- ', 2011);
and interprets that correctly as a quite unusual name.
Among other problems, this approach is very fragile. If the strings in your language don't substitute \\ as \, or if your string substitution function doesn't interpret \\ as \ (if it's not a regex function, or it uses $1 instead of \1 for backreferences), you could end up with an even number of slashes like
... VALUES ('Robert\\'); DROP TABLE Students; -- ', 2011);
and no SQL injection will be prevented.
The solution is not to check what the language and library does with all possible input you can think of, or to anticipate what it might do in a future version, but rather to use the facilities provided by the database. These usually come in two flavours:
database-aware escaping, which does exactly the right escaping of any data because the client library matches the server and it knows what the character encoding of the database you are querying is:
sql = "... '" + database_handle.escape(name) + "' ..."
out-of-band data submission (usually with prepared statments), so the data isn't even in the same string as the code:
sql = "... VALUES (:n, :y);"
database_handle.query(sql, n = name, y = year)
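To see the out-of-band effect in plain SQL, here is a sketch with a server-side prepared statement against the Students table from the example above (ins is a made-up statement name; dollar-quoting keeps the attack string readable):
PREPARE ins (text, int) AS
    INSERT INTO Students (name, year) VALUES ($1, $2);
EXECUTE ins($$Robert'); DROP TABLE Students; --$$, 2011);
-- one row is inserted with that whole string as the name; no second statement runs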
