Avoiding auto expansion of columns in RawSQL - haskell

Apologies for the multiple questions on this topic. I am trying to update a column based on other columns in a table, and so far nothing seems to be working. I tried updateWhere and then rawSql with UPDATE (Ambiguous Type Error When Using RawSql Update), but both have issues.
updateWhere does not allow other column names (only values), so that's ruled out.
I tried rawSql with an UPDATE, but it automatically expands all the entity's column names, which breaks the update. If there is a way to stop it from expanding column names (leaving out ?? does not solve the problem), that would work perfectly. For example, if I run UPDATE table SET X = Y - ? [input values], it generates UPDATE table.f1, table.f2, etc. SET X = Y - ? [input values].
This is one of those queries that I want to run in the background as an admin, so I don't care about type safety. If there is a way to blindly execute a SQL string, that would work as well.
All I want to do is: SET X = (Y - Constant). Any suggestions would be greatly appreciated.
Thank you!

I haven't tried it myself, but from reading the module documentation, I think rawExecute is what you are looking for.
You might also want to file a bug report for persistent. I don't think rawSql is supposed to do column-name expansion for anything but ??. At the least, it's an omission in the documentation, even if it's the desired behavior.
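For what it's worth, a minimal sketch of that (untested; x and y stand for the question's X and Y, and updateXFromY is a made-up name):

{-# LANGUAGE OverloadedStrings #-}

import Database.Persist.Sql (SqlPersistT, rawExecute, toPersistValue)

-- rawExecute sends the statement verbatim, binding each ? from the list;
-- it does no column-name expansion because it never marshals a result set.
updateXFromY :: Int -> SqlPersistT IO ()
updateXFromY c = rawExecute "UPDATE table SET x = y - ?" [toPersistValue c]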

Related

TypeORM to_tsquery Injection prevention

I want to perform a full-text search on 2 columns, with partial queries included.
I've tried multiple options and this one seems the best to me:
Add <-> between the words of the query and :* at the end
Execute query
The problem is that I have to execute the query in TypeORM, so when I use to_tsquery(:query) there might be invalid syntax in the query, which produces an error.
The function plainto_tsquery() would be perfect since it prevents invalid syntax in the argument, but at the same time it prevents the partial queries, which I can do as described.
Any idea how I could combine the best of the two worlds?
You could try something like
SELECT to_tsquery(quote_literal(query) || ':*')
This will add <-> between words and :* at the end of every word, while quote_literal should protect you from syntax issues by escaping the text.
Disadvantage of this method however is that the generated query might behave unexpectedly when encountering queries with symbols, e.g. o'reilley as query will yield 'o':* <-> 'reilley':* as tsquery, which likely won't give back the expected result. Unfortunately, the only solution I know for this is cleaning both the input and text data of any symbols.
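Wired into TypeORM, that could look roughly like the following (a sketch: the article table, its tsv column, and the search helper are hypothetical, not from the question; the user input stays a bound parameter):

import { DataSource } from "typeorm";

// $1 is bound by the Postgres driver; quote_literal + ':*' assemble the
// prefix-matching tsquery inside the database itself.
async function search(dataSource: DataSource, query: string) {
  return dataSource.query(
    "SELECT * FROM article WHERE tsv @@ to_tsquery(quote_literal($1) || ':*')",
    [query],
  );
}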

merging two string variables results in empty value

I'd like to merge two string variables (STRING_VAR1 and STRING_VAR2) into one string variable STRING_ALL, such that the content of STRING_VAR1 or STRING_VAR2 is copied into STRING_ALL depending on which of the two variables contains any data (see example_dataset). If both STRING_VAR1 and STRING_VAR2 are missing, STRING_ALL should be missing as well.
I've tried CONCAT (see code below), but that doesn't work for some reason and leaves me with only empty cases for STRING_ALL.
STRING STRING_ALL(A4)
COMPUTE STRING_ALL = CONCAT(STRING_VAR1, STRING_VAR2)
Thanks in advance!
Eli's suggestion gave you the necessary information to solve this specific issue. If you want to know why, check the Command Order topic in the SPSS Statistics Command Syntax Reference. It discusses the different types of commands and the fact that some of them, such as COMPUTE, do not take effect immediately, but are stored pending execution of a command that causes a data pass.
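Concretely, a sketch of the working syntax using the question's variable names (the RTRIM is an extra guard against SPSS's trailing-blank padding, beyond what this answer covers):

* COMPUTE is pending until a data pass; EXECUTE forces one, and RTRIM strips the trailing blanks SPSS pads short strings with.
STRING STRING_ALL (A4).
COMPUTE STRING_ALL = CONCAT(RTRIM(STRING_VAR1), STRING_VAR2).
EXECUTE.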

How do I make a WHERE clause with SQLAlchemy to compare to a string?

Objective
All I am trying to do is retrieve a single record from a specific table where the primary key matches. I have a feeling I'm greatly overcomplicating this, as it seems to be a simple enough task. I have a theory that it may not know the variable's value because it isn't actually pulling it from the Python code but is instead trying to find a variable by the same name in the database.
EDIT: Is it possible that I need to wrap my where clause in an expression statement?
Attempted
My Python code is
def get_single_record(name_to_search):
    my_engine = super_secret_inhouse_engine_constructor("sample_data.csv")
    print("Searching for " + name_to_search)
    statement = my_engine.tables["Users"].select().where(
        my_engine.tables["Users"].c.Name == name_to_search
    )
    # Print out the raw SQL so we can see what exactly it's checking for
    print("You are about to run: " + str(statement))
    # Print out each result (should only be one)
    print("Results:")
    for item in my_engine.execute(statement):
        print(item)
I tried hard coding a string in its place.
I tried using like instead of where.
All to the same end result.
Expected
I expect it to generate something along the lines of SELECT * FROM MyTable WHERE Name='Todd'.
Actual Result
Searching for Todd
STATEMENT: SELECT "Users"."Name", ...
FROM "Users"
WHERE "Users"."Name" = ?
That is an actual question mark appearing in my statement, not simply my own confusion. This is then followed by it printing out a collection of all the records from the table, as though it successfully matched everything.
EDIT 2: Running either my own hard-coded SQL string or the query generated by Alchemy returns every record from the table. I'm beginning to think the issue may be with the engine I've set up not accepting the query.
Why I'm Confused
According to the official documentation and third party sources, I should be able to compare to hardcoded strings and then, by proxy, be able to compare to a variable.
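Worth noting: the ? in the printed statement is the DB-API placeholder for a bound parameter; the value 'Todd' travels separately at execute time, so the generated WHERE clause itself is fine. A sketch against a plain SQLite engine (hypothetical, since the question's engine is built from a CSV by an in-house constructor, which may be where the real problem lies) that also renders the statement with the value inlined for debugging:

from sqlalchemy import MetaData, Table, create_engine, select

engine = create_engine("sqlite:///sample_data.db")  # stand-in engine
metadata = MetaData()
users = Table("Users", metadata, autoload_with=engine)

statement = select(users).where(users.c.Name == "Todd")

# Render the bound value inline -- for debugging only
print(statement.compile(compile_kwargs={"literal_binds": True}))

with engine.connect() as conn:
    for row in conn.execute(statement):
        print(row)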

Saving Spark Dataframe to csv has empty rows

LATER EDIT 2: I found the problem. I should normally delete this question, as the mistake I made is not related to what I'm asking; the source of the problem was somewhere else.
There are some nuggets of knowledge in it, though, so I will leave it up unless the community decides to take it down.
LATER EDIT: Not sure why this did not come to me earlier, but the solution is to use dataframe.na.drop("all") to get rid of all the empty rows. I would still like to know why they appear, though. Other filters do not create these empty lines.
I can't find any answers or hints as to why this happens. I suspect filter is the culprit, but I am not sure if so, why, and how to fix it.
I define a dataframe as another dataframe filtered on several conditions. Then I save it as CSV:
var dataframe = dataframe_raw.filter($"column1" !== $"column2" || $"column3"!==$"column4").drop($"column2").drop($"column4")
dataframe.write.mode("overwrite").option("header","true").csv("hdfs:///path/to/file/")
The problem is that the output "part" file(s) contain empty rows. Any idea why, and how to remove them?
Thank you.
Note: I also tried coalesce(1), which helps with saving only one file, but that file also contains empty rows.
I think the problem is related to operator precedence in Scala. To solve this, please try changing !== to =!=. Scala treats an operator that ends in = (and is not one of the built-in comparisons like !=) as an assignment operator with the lowest possible precedence, so the !== comparisons do not group with || the way you'd expect; =!= starts with =, so it keeps normal precedence.
dataframe_raw.filter($"column1" =!= $"column2" || $"column3"=!=$"column4")
The second option is to add parentheses.
dataframe_raw.filter(($"column1" !== $"column2") || ($"column3"!==$"column4"))
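Putting that together with the asker's later na.drop("all") workaround (a sketch; the column names are from the question, cleaned is a made-up name, and col avoids needing the $ implicits):

import org.apache.spark.sql.functions.col

val cleaned = dataframe_raw
  .filter(col("column1") =!= col("column2") || col("column3") =!= col("column4"))
  .drop("column2", "column4")
  .na.drop("all")  // drop rows where every remaining column is null

cleaned.write.mode("overwrite").option("header", "true").csv("hdfs:///path/to/file/")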

Racket: extracting field ids from structures

I want to see if I can map Racket structure fields to columns in a DB.
I've figured out how to extract accessor functions from structures in PLT Scheme using the fourth return value of:
(struct-type-info)
However, the returned procedure indexes into the struct using an integer. Is there some way that I can find out what the field names were at the point of definition? Looking at the documentation, it seems like this information is "forgotten" after the structure is defined and exists only via the generated accessor functions: (<id>-<field-id> s).
So I can think of two possible solutions:
Search the namespace symbols for ones that start with my struct name (yuk);
Define a custom define-struct macro that captures the ordered sequence of field-names inside some hash that is keyed by struct name (eek).
I think something along the lines of option 2 is the right approach (define-struct has a LOT of knobs, and many don't make sense for this), but instead of making a hash, just make your macro expand into functions that manipulate the database directly. The syntax/struct library can help you with parsing the define-struct form.
The answer depends on what you want to do with this information. The thing is that it's not kept at runtime -- just like bindings in functions, it does not exist at runtime. But it does exist at the syntax level (= compile time). For example, this silly example will show you the value that is kept at the syntax level, which contains the structure shape:
> (define-struct foo (x y))
> (define-syntax x (begin (printf ">>> ~s\n" (syntax-local-value #'foo)) 1))
>>> #<checked-struct-info>
It's not showing much, of course, but this should be a good start (you can look for struct-info in the docs and in the code). But this might not be what you're looking for, since this information exists only at the syntax level. If you want something that is there at runtime, then perhaps you're better off using alists or hash tables?
UPDATE (I skimmed your question too quickly before):
To map a struct onto a DB table row, you'll need more things defined: at least the DB table and the fields each slot stands for, and possibly an open DB connection to store values into or read values from. So it looks to me like the best way to do that is via a macro anyway -- this macro would expand into a use of define-struct plus everything else that you'd need to keep around.
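A bare-bones sketch of option 2, for the record (define-struct/fields and the example names are hypothetical; real code would expand into the DB-manipulating functions rather than just a list):

#lang racket
(require (for-syntax racket/syntax))

;; Expands into the struct definition plus a runtime list of the
;; ordered field names, captured at the point of definition.
(define-syntax (define-struct/fields stx)
  (syntax-case stx ()
    [(_ name (field ...))
     (with-syntax ([fields-id (format-id #'name "~a-fields" #'name)])
       #'(begin
           (define-struct name (field ...))
           (define fields-id '(field ...))))]))

(define-struct/fields user (id email))
user-fields  ; => '(id email)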
