How do I filter a SQL query by exact match to a string?

I'm using MS SQL 2008 and I have a status field that comes like this:
"REF CNF PCNF REL"
I need to get all the orders with status CNF without returning PCNF.
I could do it by requiring spaces before and after: WHERE STATUS LIKE '% CNF %', but if CNF is the first or last status in the list it wouldn't work.
One solution that worked was:
WHERE
PATINDEX('CNF %', STATUS) > 0 OR
PATINDEX('% CNF %', STATUS) > 0 OR
PATINDEX('% CNF', STATUS) > 0
But that is just horrible.

As Marc B. said, you should normalize your table to avoid storing more than one value in a single field.
If you don't have the rights to do that, or if you want to keep your model as it is, you can pad your string with spaces before and after:
WHERE ' '+STATUS+' ' LIKE '% CNF %'
This way you don't have to worry about CNF being first or last item in your list.
I don't know if it's the most elegant/effective solution, but it works.
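A minimal sketch of the full query, assuming a hypothetical ORDERS table holding the STATUS column from the question:

SELECT *
FROM ORDERS  -- table name is an assumption
WHERE ' ' + STATUS + ' ' LIKE '% CNF %'  -- padding lets first/last tokens match too

Because the padded value always has a space on both sides of every token, a single pattern covers CNF in any position.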

Why not simply
WHERE STATUS LIKE '% CNF%' OR STATUS LIKE 'CNF%'
?
The wildcard % matches any sequence of characters, including the empty string.
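For example, with the statuses from the question: 'CNF PCNF REL' matches 'CNF%', 'REF CNF REL' matches '% CNF%', and 'REF PCNF REL' matches neither, since the character before its CNF is 'P', not a space. (One caveat: a token that merely starts with CNF, say a hypothetical CNFX, would also match.)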

Using SQL Server 2008's built-in functions, the best I can think of is getting it down to just two conditions:
where STATUS like 'CNF%' or STATUS like '%[^P]CNF%'
But if you were willing to install a .Net add-on, you could use regular expressions like so:
where 1 = dbo.RegExpLike(STATUS, '(^| )CNF( |$)')

Related

How to prevent SQL injection in InfluxDB for a user-supplied measurement

Say I have an InfluxDB query where the user supplies a measurement. In this case the user supplies the value "foo". Then I would construct the query:
SELECT * FROM "foo"
WHERE time > '2022-06-21T18:27:16.041Z'
What's the best way to prevent injection attacks here? I know InfluxDB supports bind parameters, but apparently that feature only works for the WHERE clause, so it wouldn't help me.
I was thinking of trying this:
const query = `
  SELECT "value" FROM "${Influx.escape.measurement(key)}"
  WHERE time > '2022-06-21T18:27:16.041Z'
`
...but based on my testing that function doesn't escape quotation marks, only spaces.
I'm using InfluxDB 1.x in Node.js via the influx npm package.
Before sending the value to the query, try filtering out characters like single and double quotes.
Looking into the docs and articles, I can see that every subquery is wrapped in parentheses, like:
select * from (select "value" from "measurement") <where_clause>
So filtering out parentheses between "FROM" and "WHERE" should be enough.
Based on:
#1 https://www.influxdata.com/blog/tldr-influxdb-tech-tips-january-26-2017/
#2 https://docs.influxdata.com/influxdb/v1.7/query_language/data_exploration/#subqueries
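A minimal sketch of that filtering idea in Node.js (the sanitizeMeasurement helper and its allow-list regex are my own, not part of the influx package; widen the character set to whatever your measurement names legitimately use):

// Hypothetical helper: reject any measurement name that could break out
// of the double-quoted identifier (quotes, parentheses, backslashes, ...).
function sanitizeMeasurement(name) {
  if (!/^[A-Za-z0-9_-]+$/.test(name)) {
    throw new Error('Invalid measurement name: ' + name)
  }
  return name
}

const query = `
  SELECT "value" FROM "${sanitizeMeasurement(key)}"
  WHERE time > '2022-06-21T18:27:16.041Z'
`

An allow-list like this is safer than stripping individual dangerous characters, since anything not explicitly permitted is rejected.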

How to visualize a count of all values in an array field in Kibana

I am having trouble creating a particular type of visualization in Kibana. My events in Kibana are statistics on communications between two IP addresses. Two of the fields are lists of ports used by each IP address. An example of the fields would be:
ip1 = 192.168.101.2
ip2 = 192.168.101.3
ip2Ports = 80,443
ip1Ports = 80,57000,0
I would like to have a top count of all the values such as
port   count
80     2
57000  1
443    1
I have been able to parse ip2Ports into ip2Ports_List.column1, ip2Ports_List.column2, etc., but I can only choose one term with the terms aggregation in the visualization. I can split the chart, but that leads to separate counts for each field. If I go by the original ip2Ports field, it is just aggregated as the whole string, such as "80,443".
Is it even possible to create a top-count visualization of fields with multiple values? If so, how would I do so? If not, is there a way to restructure my data so I can? Thank you!
My issue stemmed from the format of the values being sent in by Logstash. I had thought that the 'ip2Ports_List.column1' format, which resulted from using the csv filter, was part of an array. It wasn't. After analyzing it, 'ip2Ports_List.column1' didn't seem much different from a new field.
Elasticsearch needed an array to give me the visualization I wanted. I wasn't sure of the best way to produce one, so I ended up using the ruby filter. This is what the code ended up looking like:
ruby {
  code => "fields = event.get('portsIp').split(',')
           event.set('portsIpArray', fields)"
}
Where 'portsIp' looked something like "80,443". Splitting it turned the value into a Ruby array, and I set that array as the value of a new event field, 'portsIpArray'.
From there, when I tried to visualize the 'portsIpArray' field, it looked exactly how I wanted: each port was treated as a separate value while still being associated with the same event/field.
Extra:
Something else I discovered: if you write your code like I did, directly in the Logstash conf file, Logstash doesn't like it if you use double quotes inside the double-quoted code string. In hindsight that makes sense, but it doesn't give a clear error, so it's difficult to figure out.
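For reference, a sketch of how that filter might sit in a full Logstash config (the surrounding filter block is illustrative; note the single quotes inside the double-quoted code string, per the caveat above):

filter {
  ruby {
    # Split the comma-separated port string into a real array so
    # Elasticsearch indexes each port as a separate value.
    code => "fields = event.get('portsIp').split(',')
             event.set('portsIpArray', fields)"
  }
}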

Delete all in FOXPRO

This question may seem rudimentary but nothing that I have found online quite fits.
I am looking at an old FoxPro script that we used to make a table. At the moment, I am attempting to translate this script into SQL. Of note is the following:
delete all for code='000000'
pack
If I understand this correctly, it deletes all rows/records where the code field has a value of 000000. Am I correct?
"Into SQL"
What do you mean by "SQL"?
Do you mean doing the same thing using SQL in VFP, on a VFP table? If so, then:
use myTable exclusive
delete from myTable where code = '000000'
pack
But I doubt you are asking this, since you could simply use the xBase code you already have.
Do you mean how to do that in an SQL backend like MS SQL Server, PostgreSQL, MySQL, ...? If so, then:
delete from myTable where code = '000000'
Note: In your code, "all" is unnecessary, but it wouldn't do any harm.
Note 2: In the VFP code you wrote, the first line only "marks" the rows that have code value '000000' as "deleted"; the second line actually removes those rows from the table.
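A small sketch of that two-step behavior (the table name is assumed):

use myTable exclusive
delete for code = '000000'   && only marks the matching rows as deleted
* at this point the marked rows could still be restored with RECALL
pack                         && physically removes the marked rows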
What version of FoxPro?
delete from [table] where code = '000000'
pack
or
delete for code = '000000'
pack
Both should work.

Using Excel to run a statement many times

I am trying to use Excel to update a list of part numbers in my database:
UPDATE stock s
SET s.STC_AUTO_KEY = 2
WHERE s.WHS_AUTO_KEY = 2
  AND EXISTS (
    SELECT p.PNM_AUTO_KEY
    FROM PARTS_MASTER p
    WHERE s.PNM_AUTO_KEY = p.PNM_AUTO_KEY
      AND p.PN = '102550');

UPDATE stock s
SET s.STC_AUTO_KEY = 2
WHERE s.WHS_AUTO_KEY = 2
  AND EXISTS (
    SELECT p.PNM_AUTO_KEY
    FROM PARTS_MASTER p
    WHERE s.PNM_AUTO_KEY = p.PNM_AUTO_KEY
      AND p.PN = '204-060-444-003');
The statements run without semicolons, but when I try to run more than one at once with semicolons I get the error:
SQL Error [911] [22019]: ORA-00911: invalid character
java.sql.SQLSyntaxErrorException: ORA-00911: invalid character
so... it looks like I don't know how to run more than one basic statement at once.
I am using DBeaver to interact with an Oracle database.
Thanks guys, sorry if this is a no-brainer.
Try adding a blank line between each update statement, if possible. You can do this easily with a text editor that supports regular expressions: just replace ';\n' with ';\n\n'.
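For illustration, the two statements from the question after that replace, content unchanged, just a blank line between them. Run the file with DBeaver's "Execute script" action rather than single-statement execution; the single-statement path hands the trailing ';' straight to the Oracle JDBC driver, which is what produces ORA-00911:

UPDATE stock s SET s.STC_AUTO_KEY = 2
WHERE s.WHS_AUTO_KEY = 2
  AND EXISTS (SELECT p.PNM_AUTO_KEY FROM PARTS_MASTER p
              WHERE s.PNM_AUTO_KEY = p.PNM_AUTO_KEY AND p.PN = '102550');

UPDATE stock s SET s.STC_AUTO_KEY = 2
WHERE s.WHS_AUTO_KEY = 2
  AND EXISTS (SELECT p.PNM_AUTO_KEY FROM PARTS_MASTER p
              WHERE s.PNM_AUTO_KEY = p.PNM_AUTO_KEY AND p.PN = '204-060-444-003');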

Using indexed types for ElasticSearch in Titan

I currently have a VM running Titan over a local Cassandra backend and would like the ability to use ElasticSearch to index strings using CONTAINS matches and regular expressions. Here's what I have so far:
After titan.sh is run, a Groovy script is used to load in the data from separate vertex and edge files. The first stage of this script loads the graph from Titan and sets up the ES properties:
config.setProperty("storage.backend","cassandra")
config.setProperty("storage.hostname","127.0.0.1")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","db/es")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
The second part of the script sets up the indexed types:
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make();
The third part loads in the data from the CSV files; this has been tested and works fine.
My problem is, I don't seem to be able to use the ElasticSearch functions when I do a Gremlin query. For example:
g.E.has("property",CONTAINS,"test")
returns 0 results, even though I know this field contains the string "test" for that property at least once. Weirder still, when I change CONTAINS to something that isn't recognised by ElasticSearch, I get a "no such property" error. I can also perform exact string matches and any numerical comparisons, including greater than or less than; however, I expect the default index is being used rather than ElasticSearch in those cases.
Given the lack of errors when I try to run a more advanced ES query, I am at a loss as to what is causing the problem here. Is there anything I may have missed?
Thanks,
Adam
I'm not quite sure what's going wrong in your code. From your description everything looks fine. Can you try the following script (just paste it into your Gremlin REPL):
config = new BaseConfiguration()
config.setProperty("storage.backend","inmemory")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","/tmp/es-so")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
g = TitanFactory.open(config)
g.makeKey("name").dataType(String.class).make()
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make()
g.makeLabel("knows").make()
g.commit()
alice = g.addVertex(["name":"alice"])
bob = g.addVertex(["name":"bob"])
alice.addEdge("knows", bob, ["property":"foo test bar"])
g.commit()
// test queries
g.E.has("property",CONTAINS,"test")
g.query().has("property",CONTAINS,"test").edges()
The last two lines should return something like e[1t-4-1w][4-knows-8]. If that works and you still can't figure out what's wrong in your code, it would be good if you could share your full code (e.g. on GitHub or in a Gist).
Cheers,
Daniel
